[Samba] Transfer rates faster than 23MBps?

Mark Smith msmith at customflix.com
Fri Sep 22 17:15:46 GMT 2006


Doug VanLeuven wrote:
> OK, I'll top post.
> I can't let this stand unanswered.
> I ran a LOT of tests with gigabit copper and windows machines.  I never 
> did better than 40 seconds per gig.  That was with the Intel cards 
> configured for maximum cpu utilization.  80-90% cpu for 40 sec per gig.  
> On windows.  Uploads went half as fast.  Asymmetric.  Of course I only 
> had 32 bit PCI, 2.5Gig processor motherboards with 45MBps drives.
> 
> Which leads me to my point.  One can't rationally compare performance of 
> gigabit ethernet without talking about hardware on the platforms.  I 
> wouldn't think you'd have overlooked this, but one can bump up against 
> the speed of the disk drive.  Raid has overhead.  Have you tried 
> something like iostat?  Serial ATA?  I seem to recall the folks at 
> Enterasys indicating 300Mbps as a practical upper limit on copper gig.  
> Are you using fiber?  64 bit PCI?  Who made which model of the network 
> card?  Is it a network card that's well supported in Linux?  Can you 
> change the interrupt utilization of the card?  What's the CPU 
> utilization on the Redhat machine during transfers?
> 
> I don't have specific answers for your questions, but one can't just say
> this software product is slower on gigabit than the other one without
> talking hardware at the same time.

You have a very good point:  I never indicated what my hardware 
situation was.

Server: Rackable UltraDense.  It's an Opteron 250, 2GB RAM, a 3Ware RAID 
controller, and 12x 500GB SATA disks (about 460GB each formatted) in 2x 
6-disk RAID5 arrays (a little space wasted due to a 2TB limit somewhere). 
Ethernet is a Broadcom BCM85702A20 gigabit (two of them, actually, but 
we're only using one).

I've used a number of different clients, ranging from a Dell 850 copying 
to /dev/null, to a Dell OptiPlex GX620 copying to a local SATA drive, to 
another Rackable UltraDense, running both Linux and WinXP.  (Not so 
surprisingly, the Linux client is slower than the WinXP client, 
although using smbclient, as Jeremy suggested, was just as fast as the 
WinXP client: our famous 45-second 1GB transfer.)
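For reference, the smbclient timing runs looked something along these 
lines (the server name, share, credentials, and file name here are 
placeholders, not the actual setup; a live SMB server is needed to run it):

```shell
# Hypothetical example: time pulling a 1GB file over SMB with smbclient,
# writing to /dev/null so only protocol + network + server disk are measured.
time smbclient //server/share -U user%password -c 'get bigfile.bin /dev/null'
```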

Reasons I didn't list hardware in my first email:
- iPerf shows that I can saturate the Ethernet interfaces, TCP/IP stack, 
and switching fabric at 120MBps (960Mbps).
- Copying the same file to/from the same machines using HTTP (Apache2) 
transfers at about 60MBps (480Mbps).  This uses the same disk and network 
subsystems.
- Copying a 1GB file from a RAM disk on the server to /dev/null on the 
client (eliminating disk performance from the equation entirely) does 
_NOT_ speed things up at all: still stuck at about 45 seconds, about 
23MBps (182Mbps).
- Copying locally from the disk to /dev/null (using dd, no network at 
all) takes about 17 seconds for a 1GB file, which matches up nicely with 
the 60MBps (480Mbps) seen with HTTP.
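For anyone wanting to reproduce the local-disk baseline, the dd test is 
essentially this (the file name and the 100MB size are illustrative; the 
original test used a 1GB file):

```shell
# Create a test file, then time reading it back to /dev/null.
# Note: unless the page cache is dropped first, the read may be served
# from RAM and overstate the disk's real throughput.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=100 2>/dev/null
sync
time dd if=/tmp/ddtest of=/dev/null bs=1M
```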

Given these tests, I would expect to see transfer rates of up to 60MBps 
in the best case.  Admittedly, that is a _BEST_ case.  I know I can't 
avoid that bottleneck, and honestly, 60MBps would be totally sufficient 
for our use.

The question is, what bottleneck am I hitting now?  The only thing that 
changes between the HTTP and SMB tests is the transport mechanism (and 
its interactions with other systems, e.g. the kernel), so naturally I 
suspect that.  For the time being, at least, I need to use the SMB 
protocol, so I'm trying to figure out what I can tweak, if anything, to 
make this go faster.
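For what it's worth, the knobs most often suggested for this on Samba 
3.0 are the socket options and the raw read/write settings in smb.conf. 
A sketch of the commonly suggested values (the defaults may already 
match on some builds, and results vary by workload, so this is a 
starting point rather than a fix):

```ini
# [global] section of smb.conf -- commonly suggested tuning, not a guaranteed fix
socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
read raw = yes
write raw = yes
max xmit = 65535
```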

As a data point, I'm going to try a newer version of Samba.  (RHEL4 uses 
3.0.10-RedHat-Heavily-Modified-Of-Course)  If that makes a difference, 
then I have to decide whether it's worth it to me to keep RedHat support 
or not.  (And when I say "I," I really mean "my management.")

> I have lots of memory.  I use these configurations in sysctl.conf to up 
> the performance of send/receive windows on my systems.  There are 
> articles out there.  I don't have historical references handy.
> YMMV.
> net.core.wmem_max = 1048576
> net.core.rmem_max = 1048576
> net.ipv4.tcp_wmem = 4096 65536 1048575
> net.ipv4.tcp_rmem = 4096 524288 1048575
> net.ipv4.tcp_window_scaling = 1

I have not tried tweaking the TCP stack in the OS.  I'll give that a shot.
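A minimal sketch of how those settings would be applied (the values are 
Doug's, above; these commands just inspect the current limits and then 
load the new ones):

```shell
# Inspect the current send/receive buffer limits (read-only, safe anywhere):
sysctl -n net.core.wmem_max
sysctl -n net.core.rmem_max
# After adding Doug's lines to /etc/sysctl.conf, load them as root:
# sysctl -p
```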

Thank you very much, Doug.

-Mark

