[Samba] Maximum samba file transfer speed on gigabit...

bjquinn at seidal.com
Thu Jun 1 01:43:37 GMT 2006


Ok, so maybe someone can explain this to me.  I've been banging my head
against the wall on this one for several weeks now, and the powers that
be are starting to get a little impatient.  What we've got is an old
FoxPro application, with the FoxPro .dbf files stored on a Linux file
server running Samba (Fedora 3 currently, with Fedora 5 on the new test
server).  We're having speed problems (don't mention that we should be
using a real SQL server - I know, I know).  So my thinking is that I need
to increase the speed at which the server can serve those .dbf files
across the network.  We'd been getting somewhere between 10-20 MB/s,
depending on file size, etc., and we've already got a gigabit network.
So I'm thinking to myself, "a gigabit is 125 MB/s, so we should be going
a LOT faster."  Ok, so I know it's really only about 119 MB/s once you
account for the 1000-bytes-per-KB vs. 1024-bytes-per-KB marketing crap.
Whatever.  Either way, that's a lot faster than 10-20 MB/s.  I've got a
bottleneck, I tell myself.
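
(For reference, the arithmetic I'm working from is just unit conversion,
nothing Samba-specific:

    1 Gb/s  = 1,000,000,000 bits/s / 8 bits per byte
            = 125,000,000 bytes/s  = 125 MB/s decimal
           ~= 119 MB/s if you count 1 MB as 1,048,576 bytes.)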

The hard drive light on the old server is blood red all the time, and top
reports high iowait (~10-40%).  Must be the hard drives.  So we upgrade
from 2x 10K RPM SATA 1.5Gbps drives in RAID-0 to 4x 15K RPM SAS 3.0Gbps
drives in RAID-10.  That should do it.  Nope.  No difference, no change
whatsoever (that was an expensive mistake).
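
(In hindsight, a finer-grained look at the disks before buying hardware
would have been cheap insurance.  Something like

    # extended per-device stats every 5 seconds (iostat is in the
    # sysstat package)
    iostat -x 5

shows per-device utilization and wait times rather than the system-wide
iowait number top gives you.  I'm only guessing it would have told us
anything different, but at least it costs nothing.)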

Then it must be the network card that's the bottleneck.  So we get PCI-E
gigabit NICs, I learn all about rmem, wmem, and TCP window sizes, and I
set a bunch of those settings: rmem and wmem = 25000000 on the server,
TCP window size = 262800 on the Windows clients, and SO_SNDBUF,
SO_RCVBUF, max xmit, and read size = 262800 in smb.conf.
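
Spelled out, that tuning looks roughly like the following (I'm reciting
from memory, so treat the exact key names and the registry path as
approximate rather than gospel):

    # /etc/sysctl.conf on the Samba server (assuming the rmem/wmem knobs
    # in question are the net.core maximums)
    net.core.rmem_max = 25000000
    net.core.wmem_max = 25000000

    # smb.conf on the Samba server
    socket options = SO_SNDBUF=262800 SO_RCVBUF=262800
    max xmit = 262800
    read size = 262800

    # Windows clients - DWORD value in the registry (path approximate):
    # HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpWindowSize = 262800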

Still no change.  No change!  I can run 944 Mb/s or higher in iperf.  Why
can't I even get a FRACTION of that transferring files through Samba?  I
mean, hard drive speed shouldn't be the issue - a single one of these SAS
drives is supposed to sustain 90+ MB/s, and I have four of them raided
together.
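
(That 90+ MB/s number is what the drives are supposed to do, not
something I've measured directly.  A quick local sanity check would be
something like

    # sequential read test; pick a file larger than the server's RAM so
    # the page cache doesn't flatter the result
    dd if=/path/to/some_large_file of=/dev/null bs=1M

which is sequential-read best case, but it would at least show whether
the array on its own can beat 22 MB/s.)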

The NICs are testing out at nearly 1 Gb/s.  Is there REALLY that much
overhead in Samba?  Isn't there something I can do to make the file
transfers more efficient?
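
(The near-1 Gb/s number comes from plain TCP tests with iperf between a
client and the server, roughly:

    # on the server
    iperf -s

    # on a Windows client
    iperf -c <server-ip> -t 30

so the raw network path itself looks fine - it's only the Samba traffic
that crawls.)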

It doesn't seem to matter which settings I use in Samba: the best I ever
get is about 22 MB/s, and it sometimes bogs down to around 12 MB/s.
Assuming nothing else is the bottleneck, that's roughly 96-176 Mb/s, or
about 10-18% of the theoretical limit of gigabit ethernet.
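
(Same back-of-the-envelope conversion as before:

    12 MB/s x 8 bits/byte =  96 Mb/s  ~ 10% of 1000 Mb/s
    22 MB/s x 8 bits/byte = 176 Mb/s  ~ 18% of 1000 Mb/s

nothing subtle, just bits vs. bytes again.)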

The Windows clients never write the data they receive over the network
to disk; the application loads it straight into memory, which should be
fairly fast, and the client machines themselves are fast - 2.8+ GHz
CPUs, 800 MHz FSB, 10K RPM SATA drives, etc.  Besides that, those fast
SATA drives ought to be able to write more than 10-15 MB/s during a file
transfer anyway.  What am I missing here?  Is the overhead of Samba
really that significant, is there some setting I can change, or am I
overlooking something else?

Thanks for your help, and maybe you guys can spare my head any more injury
from the banging it has been getting over the past few weeks.

-BJ Quinn

