[Samba] Expected transfer rate of samba, SATA over gigabit to SATA

John H Terpstra jht at samba.org
Tue Mar 25 23:01:38 GMT 2008


On Tuesday 25 March 2008 04:47:25 pm darlingm wrote:
> What is a good transfer rate to expect when writing from Windows XP/Vista
> to a samba network share in the following situation?
>
> * New client: Intel Core 2 Duo 6600 with 4GB memory and an Intel gigabit
>   NIC.
> * New server: HP XW8400 with (currently) a single 5335 Xeon with 4GB memory
>   and integrated Intel gigabit, running Samba 3.0.25b as included with
>   CentOS (RHEL).
> * Netgear GS116 (gigabit) switch
>
> Using TTCP (on Windows PCATTCP) I get around 406Mbit/sec (50.75MB/sec)
> between the new client and new server. TTCP transfers random data between
> two computers, to purely test the network transfer rate -- not using either
> computer's hard drive whatsoever. This seems about right. From what I've
> read elsewhere, it appears gigabit typically performs at about 40% of its
> theoretical transfer rate of 1000Mbit/sec unless very expensive switches
> and NICs are purchased.
>
> Using hdparm, I get around 640Mbit/sec (80MB/sec) on both the client's SATA
> drive and the new server's SATA drive.
>
> Transferring a 700MB AVI file from Windows to the samba network share runs
> between 75Mbit/sec to 280Mbit/sec (9.4MB/sec to 35MB/sec), ranging wildly.
> I am not sure why some of the files sometimes transfer so much slower than
> others. I have all other network traffic stopped, and am transferring
> different AVI files each time to eliminate a possibility of the client
> machine caching the file in memory -- since I have noticed that
> transferring a file a second time right after the first time often runs
> much faster.
>
> I set up vsftpd as a temporary test, and transferring 700MB AVI files from
> Windows to the new server by ftp runs between 67Mbit/sec and 115Mbit/sec
> (8.4MB/sec to 14.4MB/sec).
>
>
> Transfers to my old server of large files were often around 92Mbit/sec
> (11.5MB/sec). I was hoping to see a huge increase moving to the new server,
> because the old server ran a slow RAID5 of IDE drives, which hdparm only
> showed a 256Mbit/sec (32MB/sec) read rate from -- and the new server runs
> (currently) a single SATA drive which hdparm shows a 640Mbit/sec (80MB/sec)
> read rate from, just like the client's hdparm results on a SATA drive.
>
>
> One of the new server's functions is to provide a backup service for
> customer machines. We copy the entire contents of a customer's hard drive
> to this backup volume temporarily. Should I be expecting to get (for large
> files) greater transfer rates than I am getting, or to go faster do I need
> to install a hot swap bay into the new server to transfer a customer's hard
> drive into?

The transfer rate you will get depends on many factors.  Your findings are 
not unusual - so you can put your mind at ease.  On the other hand, you would 
no doubt like much higher transfer rates - but that takes quite a bit of 
effort.

At the Samba end, the configuration set in smb.conf plays a role in overall 
file transfer performance.  For example, high log levels can kill throughput. 
The quality of the NICs, cabling and switches (or hubs) can also have a 
significant impact on file transfer rates.
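
Purely as an illustration (not a recommendation - check each parameter 
against the smb.conf man page for your 3.0.x release and benchmark every 
change on its own), the kind of [global] settings people usually look at 
first are:

    [global]
        log level = 1
        socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=65536 SO_SNDBUF=65536
        use sendfile = yes
        read raw = yes
        write raw = yes

None of these values are magic; in particular, only raise the log level while 
you are actually debugging.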

Also, it may be a good exercise to install MS Windows (XP or 2003 Server) on 
the machine that is currently running Samba on CentOS.  This will give an 
indication of how far off your overall configuration may be.

Another point to keep in mind is that a single Windows client running against 
a Samba server cannot demonstrate the full capability of a quad-core server. 
Also, in recent benchmarking work I found that the quality of the NIC driver 
at the Linux end substantially impacts achievable I/O bandwidth utilization.
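
If you want a quick sanity check of the NIC and its driver on the CentOS box 
(eth0 below is just a placeholder for your actual interface), something like 
the following shows which driver is bound, what the link negotiated, and 
which offloads are enabled:

    ethtool -i eth0     # driver name and version bound to the interface
    ethtool eth0        # negotiated speed/duplex - should show 1000Mb/s, Full
    ethtool -k eth0     # checksum/TSO offload settings the driver exposes

A card that has fallen back to 100Mb/s or half duplex, or a driver with 
offloads switched off, will cap throughput well below what the hardware can 
manage.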

Using 1GbE NICs (in the client and server) and one Windows XP client, it was 
a challenge to get more than 27 MBytes/sec from a 4 CPU core server.  Another 
NIC in the same server quickly got that up to 63 MBytes/sec.  This transfer 
was measured copying a 4.3 GByte file from a RAID array on the server to a 
RAM disk on the Windows workstation.

Recently I did a load of lab work evaluating different server configurations 
and tried several 10GbE NICs, eliminating the disk subsystem by switching to 
RAM disks, and got up to 450 MBytes/sec (about 3.8 Gbits/sec).  The slowest 
10GbE NICs gave just over 100 MBytes/sec.  These rates were measured using 
smbtorture with the NetBench workload on a target system with 4 CPU cores and 
32GB RAM.  The load client was another system with 4 CPU cores and 8GB RAM 
running Linux.  A single client process produced approximately 55 MBytes/sec, 
and it took between 10 and 25 client processes to achieve peak server I/O 
(depending on server configuration).
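
If you want to run a similar load against your own server, the general shape 
of an smbtorture NetBench run is sketched below.  The share name, credentials 
and client count are only placeholders, the exact options vary a little 
between Samba releases, and the NBENCH test needs the standard dbench/NetBench 
load description file (client.txt), so check smbtorture's usage output first:

    # drive the share from 10 simulated clients using the NetBench workload
    smbtorture //server/backup -U testuser%secret -N 10 NBENCH

Start with one process and keep adding clients until the aggregate throughput 
stops climbing - that plateau is the interesting number.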

Now when it comes to disk I/O, you might give some thought to eliminating the 
disk subsystems by measuring Samba performance from a RAM disk.  You may find 
a significant improvement in I/O that way, and that would point to the SATA 
subsystem as a potential bottleneck.
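
A minimal way to do that on the Linux side (the path, size and share name are 
only examples) is to mount a tmpfs RAM disk and export it as a throwaway 
share:

    mkdir -p /mnt/ramdisk
    mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk

    # then add a scratch share to smb.conf and restart smbd:
    # [ramtest]
    #     path = /mnt/ramdisk
    #     read only = no

Copy the same large AVI to the RAM-backed share and compare the rate against 
the SATA-backed share; a large gap points at the disk subsystem rather than 
the network or Samba itself.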

What sort of server CPU core loads were you seeing during your tests? 
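
If you have not watched the server while a copy is in flight, it is worth 
doing - these are standard tools on CentOS (iostat comes with the sysstat 
package):

    top             # per-process CPU; watch the smbd serving the client
    vmstat 1        # overall CPU, run queue and I/O wait, once per second
    iostat -x 1     # per-disk utilisation and service times

A single smbd pinned on one core, or %iowait climbing, quickly narrows down 
where the time is going.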

Have you considered creating a test environment that can simulate an office 
of many PCs?

Lastly, running a simple protocol such as FTP does little to stress the 
network in the way the SMB/CIFS protocols do.  Observing great FTP or TTCP 
performance alongside poor SMB/CIFS throughput therefore tells you little by 
itself.  You really need to consider the total environment.

I hope this helps.

Cheers,
John T.

