[Samba] 1MB/s gigabit transfers on dell poweredge

godwin at acp-online.co.uk godwin at acp-online.co.uk
Sat Mar 14 03:22:30 GMT 2009


OK, I am dealing with two Johns here :)

John Terpstra,

Thanks for your reply.

Whoa ... 90 Mbytes/sec! I would give an arm to get that kind of
throughput. Right now I am getting a little over 1% of that :-D

Can you tell me what NICs, switch and cabling you are using in your setup?

I am using Lustre as the backend to Samba CTDB. I don't know how much you
know about Lustre, so I won't bore you with a sermon :-). (I am not a
Lustre expert myself - not yet ;-) )

My initial tests of Lustre do give promising results. Using a native
Lustre client on Linux, I can get high throughput; the only limitation is
the network interconnect on the client side. Four PCI-E Intel EEPRO ports
bonded together should give me 4x30=120 MB/s (a worst-case estimate
allowing for inferior equipment).
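
For the record, the bonding setup I have in mind looks roughly like the
sketch below (interface names and the address are placeholders, and the
exact configuration mechanism depends on the ifenslave version shipped
with Etch). Note that balance-rr is the only bonding mode where a single
stream can exceed one link's speed:

  # /etc/modules or modprobe config - load the bonding driver
  # in round-robin mode with link monitoring every 100 ms
  options bonding mode=balance-rr miimon=100

  # /etc/network/interfaces (sketch only; needs the ifenslave package)
  auto bond0
  iface bond0 inet static
      address 192.168.1.10    # placeholder address
      netmask 255.255.255.0
      up ifenslave bond0 eth1 eth2 eth3 eth4      # enslave the four ports
      down ifenslave -d bond0 eth1 eth2 eth3 eth4 # detach on shutdown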

As I mentioned in my last post, the issue is whether Samba CTDB can scale
to that bandwidth. Right now, on a standard Samba install on Debian Etch,
I am struggling to get more than 1.4 MB/s (NFS etc. works fine at 50 MB/s).
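
To confirm that the 1.4 MB/s is Samba's fault and not the wire's, I am
first checking raw TCP throughput with iperf (the hostname below is a
placeholder):

  # on the server
  iperf -s

  # on the client: a result close to ~940 Mbit/s would mean the network
  # path is fine and the bottleneck is Samba itself
  iperf -c fileserver -t 30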

Only when I resolve this can I look at CTDB with Lustre.
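
One of the first things I will try is the usual socket tuning in
smb.conf; the values below are just the commonly suggested starting point
for 3.0.x, not something I have validated yet:

  # in the [global] section of smb.conf
  socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
  read raw = yes    # allow large raw reads (the default, but worth confirming)
  write raw = yes   # allow large raw writes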

If I had the luxury of using the native Lustre client, I am sure I would
meet or exceed the 100 MB/s objective. Unfortunately, Macs don't have a
native Lustre client yet. Of course, the whole system will still have to
be carefully hand-built and tuned, with each component - NIC,
motherboard, switch, cabling, etc. - chosen to work optimally with the
others.

I guess my current speed issue is related to the Samba version. I am
using the 3.0.24 that ships with Debian Etch, so I need to upgrade. I
will be testing various newer versions of Samba today.
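
On Etch the path of least resistance is probably backports, assuming a
newer samba package is available there (untested on my end; building from
source is the fallback):

  # add the etch-backports repository
  echo "deb http://www.backports.org/debian etch-backports main" \
      >> /etc/apt/sources.list
  apt-get update
  # pull samba from backports instead of the 3.0.24 in etch
  apt-get -t etch-backports install samba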

I will keep the list posted with the results of my Samba-CTDB and Lustre
trials as and when they are conducted, as well as with progress on my
current Samba speed issue.

Thanks,
Godwin Monis


> godwin at acp-online.co.uk wrote:
>> John, thanks once again for the quick reply.
>>
> ... [snip]...
>
>> I am eager to understand how you are getting 50 MB/s on Samba
>> transfers, since, given the overheads added by the SMB protocol, you
>> should be getting 60-75 MB/s using scp/rsync/NFS.
>
> My home network running Samba-3.3.1 over 1 Gigabit Ethernet transfers up
> to 90 Mbytes/sec.  The file system is ext3 on a multi-disk RAID5 array
> that consists of 4x1TB Seagate SATA drives.
>
> When transferring lots of small files the rate drops dramatically.  When
> I run rsync over the same link between the same systems the transfer
> rate for large files hovers around 55 Mbytes/sec and drops to as low as
> 1.5 Mbytes/sec when it hits directories with lots of small files (<100
> Kbytes).
>
>> I am also working on a COTS storage solution using Lustre and Samba
>> CTDB. The aim is to provide 1000 MB/s to clients (100 MB/s to each
>> client) so they
>
> I have been working on two Samba-CTDB cluster installations. One of
> these is based on RHEL and has Samba-CTDB on a front-end cluster that
> sits on top of RHCS, over a GFS2 file system, over LVM, over iSCSI.
> The back-end consists of two systems that each hold 32TB of data,
> mirrored using DRBD.  The DRBD nodes are exported as iSCSI targets.
>
> So far, with 2 active front-end nodes (each with 8 CPU cores) and
> running the NetBench workload using smbtorture, the highest peak I/O I
> have seen is 58 MBytes/sec.  The iSCSI framework uses multiple bonded
> 1 Gigabit Ethernet adaptors, and the cluster front-end also uses
> multiple 1 Gigabit Ethernet links.
>
> I would love to find a way to get some more speed out of the cluster,
> so if you can meet your 100 Mbytes/sec objective I'd love to know how
> you did it!
>
> PS: Using Samba-3.3.1 with CTDB 1.0.70.
>
>> can edit video online. The solution also needs to be scalable in
>> terms of I/O and storage capacity, and built from open source
>> components and COTS hardware so there is no vendor lock-in. Initial
>> tests of Lustre on standard Dell desktop hardware are very good.
>> However, I need Samba CTDB to communicate with
>
> The moment you introduce global file locking I believe you will see a
> sharp decline in throughput.  Clusters are good for availability and
> reliability, but throughput is a bit elusive.
>
>> the clients, as they are Apple Macs. I haven't reached the Samba CTDB
>> configuration yet, but the Gigabit Ethernet issue had me scared until
>> I received your reply. Now I see a lot of hope ;-).
>>
>> Once again thanks for your help and do let me know if I can reciprocate
>> your kindness.
>>
>> Cheers,
>> Godwin Monis
>>
>>
>>>> Now, if it's not asking for too much, can you let me know
>>>> 1. the network chipsets used on your server and client
>>>>
>>> Main servers
>>> fileserv ~ # lspci | grep Giga
>>> 02:09.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5704
>>> Gigabit Ethernet (rev 03)
>>> 02:09.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5704
>>> Gigabit Ethernet (rev 03)
>
> NICs on the cluster servers I am working with are:
>
> nVidia Corporation MCP55 Ethernet - dual ports on mobos
> Intel Corporation 82571EB Gigabit Ethernet Controller - quad port
>
>>> dev6 ~ # lspci | grep Giga
>>> 02:09.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5704
>>> Gigabit Ethernet (rev 03)
>>> 02:09.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5704
>>> Gigabit Ethernet (rev 03)
>>>
>>> datastore0 ~ # lspci | grep net
>>> 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
>>> 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
>>>
>>> datastore1 ~ # lspci | grep net
>>> 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
>>> 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
>>>
>>> datastore2 ~ # lspci | grep net
>>> 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
>>> 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
>
> I hope this is useful info for you. Let me know if I can assist you in
> any way.
>
> Cheers,
> John T.
>
>



