[Samba] 1MB/s gigabit transfers on dell poweredge
John H Terpstra - Samba Team
jht at samba.org
Sat Mar 14 01:53:11 GMT 2009
godwin at acp-online.co.uk wrote:
> John, thanks once again for the quick reply.
>
... [snip]...
> I am eager to understand how you are getting 50 MB/s on Samba transfers,
> as, considering the overheads added by the Samba protocol, you should be
> getting 60-75 MB/s using scp/rsync/NFS.
My home network, running Samba-3.3.1 over 1 Gigabit Ethernet, transfers at
up to 90 MBytes/sec. The file system is ext3 on a multi-disk RAID5 array
consisting of 4x 1TB Seagate SATA drives.
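In case it is useful, the array health is easy to check from the kernel's
software RAID (MD) status; the device name below is only an example:

  # Overall MD state, then detail for the RAID5 set (assuming /dev/md0)
  cat /proc/mdstat
  mdadm --detail /dev/md0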
When transferring lots of small files the rate drops dramatically. When
I run rsync over the same link between the same systems, the transfer
rate for large files hovers around 55 MBytes/sec and drops to as low as
1.5 MBytes/sec when it hits directories with lots of small files (<100
KBytes).
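To show what I mean by those rsync numbers, the runs look roughly like
this; the hostname and paths are invented for illustration:

  # Single large file -- sustained sequential throughput
  rsync -a --progress /srv/archive/big.iso backup@fileserv:/scratch/

  # Tree full of small files -- this is where the rate collapses
  rsync -a --progress /srv/mail/ backup@fileserv:/scratch/mail/

rsync's --progress output reports a per-file rate, which makes
comparisons like the above easy to read off.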
> I am also working on a COTS storage solution using Lustre and Samba CTDB.
> The aim is to provide 1000 MB/s to clients (100 MB/s to each client) so they
I have been working on two Samba-CTDB cluster installations. One of these
is based on RHEL and has Samba-CTDB on a front-end cluster that sits on
top of RHCS, over a GFS2 file system, over LVM, over iSCSI. The
back-end consists of two systems, each holding 32TB of data that is
mirrored using DRBD. The DRBD nodes are exported as iSCSI targets.
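The DRBD side of that is conceptually just one mirrored resource per
storage volume; the fragment below is only a rough sketch, with invented
hostnames, devices and addresses rather than the real configuration:

  resource r0 {
      protocol C;                  # synchronous replication between the pair
      on store-a {
          device    /dev/drbd0;
          disk      /dev/sdb1;
          address   10.0.0.1:7789;
          meta-disk internal;
      }
      on store-b {
          device    /dev/drbd0;
          disk      /dev/sdb1;
          address   10.0.0.2:7789;
          meta-disk internal;
      }
  }
  # /dev/drbd0 is then exported as an iSCSI target for the front-end nodes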
So far, with 2 active front-end nodes (each with 8 CPU cores) running
the NetBench workload under smbtorture, the highest peak I/O I have seen
is 58 MBytes/sec. The iSCSI framework uses multiple bonded 1 Gigabit
Ethernet adaptors, and the cluster front-end also uses multiple 1
Gigabit Ethernet links.
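The NetBench figures come from smbtorture's NBENCH test; the exact
invocation varies between Samba versions, but it is along these lines
(share name and credentials are made up):

  # Replay a NetBench-style file-serving load against one front-end node
  smbtorture //ctdb-node1/bench -U tester%secret NBENCH

  # The state of a bonded link can be read from the kernel
  # (assuming the bond is named bond0)
  cat /proc/net/bonding/bond0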
I would love to find a way to get some more speed out of the cluster,
so if you can meet your 100 MBytes/sec objective I'd love to know
how you did it!
PS: Using Samba-3.3.1 with CTDB 1.0.70.
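In case it helps with your own Samba CTDB work, the clustering glue on
the front-end amounts to surprisingly little configuration. The fragment
below is only a minimal sketch with made-up addresses, not our
production files:

  # /etc/ctdb/nodes -- one private (cluster-internal) address per node
  10.1.1.1
  10.1.1.2

  # /etc/ctdb/public_addresses -- client-facing IPs CTDB moves on failover
  192.168.1.211/24 eth0
  192.168.1.212/24 eth0

  # smb.conf [global] additions so Samba uses the clustered tdb databases
  clustering = yes
  idmap backend = tdb2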
> can edit video online. The solution also needs to be scalable in terms of
> IO and storage capacity, and built out of open source components and COTS
> so there is no vendor lock-in. Initial tests on Lustre using standard Dell
> desktop hardware are very good. However, I need Samba CTDB to communicate with
The moment you introduce global file locking, I believe you will see a
sharp decline in throughput. Clusters are good for availability and
reliability, but throughput is a bit elusive.
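If you do end up chasing locking overhead, the relevant knobs on the
Samba side live in smb.conf. The fragment below only shows where they
are; it is not a recommendation, because relaxing locking trades safety
for speed:

  [video]                       # share name is just an example
      path = /cluster/video
      strict locking = no       # skip the lock check on every read/write
      posix locking = yes       # map SMB byte-range locks onto the file system
      oplocks = yes             # client-side caching; often turned off on clusters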
> the clients, as they are Apple Macs. I haven't reached the Samba CTDB
> configuration yet, but the Gigabit Ethernet issue had me scared till I
> received your reply. Now I see a lot of hope ;-).
>
> Once again thanks for your help and do let me know if I can reciprocate
> your kindness.
>
> Cheers,
> Godwin Monis
>
>
>>> Now, if it's not asking for too much, can you let me know
>>> 1. the network chipsets used on your server and client
>>>
>> Main servers
>> fileserv ~ # lspci | grep Giga
>> 02:09.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5704
>> Gigabit Ethernet (rev 03)
>> 02:09.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5704
>> Gigabit Ethernet (rev 03)
NICs on the cluster servers I am working with are:
nVidia Corporation MCP55 Ethernet - dual on-board ports
Intel Corporation 82571EB Gigabit Ethernet Controller - quad-port
>> dev6 ~ # lspci | grep Giga
>> 02:09.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5704
>> Gigabit Ethernet (rev 03)
>> 02:09.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5704
>> Gigabit Ethernet (rev 03)
>>
>> datastore0 ~ # lspci | grep net
>> 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
>> 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
>>
>> datastore1 ~ # lspci | grep net
>> 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
>> 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
>>
>> datastore2 ~ # lspci | grep net
>> 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
>> 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
I hope this is useful info for you. Let me know if I can assist you in
any way.
Cheers,
John T.