[Samba] filesystem of choice? (app-dependent, but I prefer xfs for larger files)
Linda Walsh
samba at tlinx.org
Fri Jun 24 17:54:21 MDT 2011
John Drescher wrote:
>>        I would use 'xfs'.  I believe samba was originally developed
>> over xfs, so it's likely the ea-support and acl support has had the most
>> testing there.  Especially if your file server is set up with a UPS, I'd
>> strongly recommend it.  If not, ext4 might be safer (with write
>> through).  It will be slower, but safer.
>>
>>        With a UPS, XFS's default 'write-back' will give the fastest
>> performance for large file writes (I think reads as well).  Its worst
>> performance is on "removing" large numbers of files, as that is pretty
>> much a synchronous operation...
>
> I would just use ext4; it does not have ext3's large-file slowness
> or xfs's slowdown with lots of small files.
>
> John
----
        XFS doesn't have much of a slowdown with small files
other than in deleting them.  That said, it *was* optimized
for people wanting to stream media (multiple channels) in real
time; it was designed to excel at large-file I/O.  So it's
possible benchmarks may show ext4 with some small advantages in
small-file I/O (outside of deletes), but most of those problems can
be ameliorated or eliminated if you are on good hardware (UPS-backed,
any RAIDs with battery-backed write cache) -- then you might also
improve performance by turning write barriers on or off,
depending on your HW.
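For example, barriers can be toggled per mount; a rough sketch (the
mount point is a placeholder, and nobarrier is only sane when the
write cache really is protected by a BBU or UPS):

    # battery-backed cache / UPS: skip write barriers for speed
    mount -o remount,nobarrier /srv/share
    # no such protection: keep the (default) barriers
    mount -o remount,barrier /srv/share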
        XFS should also be tuned to the RAID stripe size for
optimal performance, and given a large metadata log when the
filesystem is created: 128M, i.e. "32768b" (b = 4k blocks).
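A minimal sketch of that creation step, assuming a 64k stripe unit
across 6 data disks (the device and geometry are placeholders --
match them to your own array):

    # su/sw describe the RAID geometry; -l size=128m is the large log mentioned above
    mkfs.xfs -d su=64k,sw=6 -l size=128m /dev/sdX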
        At mount time, the optimal-speed options I use include
defaults,noatime,swalloc,largeio,logbsize=256k
(and possibly nobarrier, depending on hw)...
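As an fstab-style illustration of those options (device and mount
point are placeholders; drop nobarrier unless the write cache is
actually protected):

    # /etc/fstab -- example line only; adjust device, mount point, and options to your HW
    /dev/sdX  /srv/share  xfs  noatime,swalloc,largeio,logbsize=256k,nobarrier  0  0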
But it really depends on your HW and your usage.
        If you do need fast read/write on large files: my large
array, two striped 6-disk 7.2k-RPM-SATA RAID5's (a 'RAID50'), gets
1GB/s read/write on large I/Os....
        Speeds are comparable to raw device access.  Usually, for
large reads/writes, using *direct access* is 15-20% faster than
going through the Linux file buffers (for I/Os that exceed my
system's memory size, which makes the cache effectively useless):
you still get all the overhead of fs-cache management, but no
benefit when moving around files larger than system memory.  That
overhead may not make much difference with a single 7.2k SATA disk
(2-3TB) with a top transfer rate of 120-140MB/s, but as you raise
the data rate, the overhead becomes more significant.
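A rough way to see that difference yourself, as a sketch (the file
name and size are placeholders; size the test file larger than RAM):

    # buffered write, then a direct-I/O write of the same size, for comparison
    dd if=/dev/zero of=/srv/share/testfile bs=1M count=65536 conv=fsync
    dd if=/dev/zero of=/srv/share/testfile bs=1M count=65536 oflag=direct
    # same idea for reads; drop the page cache first so the buffered read is honest
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/srv/share/testfile of=/dev/null bs=1M
    dd if=/srv/share/testfile of=/dev/null bs=1M iflag=direct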
        I have not benchmarked xfs against ext4, but when I benchmarked
it against ext3, it was faster in all tests except large file-deletions
(removing >500-1000 files at a time).
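A crude sketch of that delete case, if you want to reproduce it
(directory and file count are arbitrary):

    # create 1000 small files, then time removing them in one pass
    mkdir /srv/share/deltest
    for i in $(seq 1 1000); do echo x > /srv/share/deltest/f$i; done
    time rm -rf /srv/share/deltest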
        BTRFS looks promising, but I, _personally_, think it is
not quite ready for production systems.
        I'm sure ext4 has improved much and excels in some benchmarks,
just as xfs excels in others -- it would depend on your usage.  Of
course xfs has been around since... um... the mid 90's, so it has
been fairly well tested (though the port on Linux is always 'ongoing'
due to new kernel interfaces and ongoing xfs performance
optimizations)... -- but those numbers above are measurements specific
to my I/O rates and, somewhat, to my CPUs' speeds (2x2.67GHz Xeons
w/4 cores ea).