[Samba] 1MB/s gigabit transfers on Dell PowerEdge

John H Terpstra - Samba Team jht at samba.org
Sat Mar 14 03:55:52 GMT 2009


godwin at acp-online.co.uk wrote:
> OK, I am dealing with two Johns here :)
> 
> John Terpstra,
> 
> Thanks for your reply.
> 
> whoa ... 90 Mbytes/sec.... I would give an arm to get that kind of
> throughput. Right now I am getting a little over 1% of that throughput :-D
> 
> Can you tell me what NICs, switch, and cabling you are using in your setup?

I gave you the NICs in the cTDB cluster (shown below).

My home network has the following NICs:

datastore:
01:00.0 Ethernet controller: Attansic Technology Corp. L1 Gigabit
Ethernet Adapter (rev b0)

Windows client:
Realtek RTL8168/8111 PCI-Express Gigabit Ethernet NIC (Built into ASUS
M2A-VM mobo)

Linux client:
02:00.0 Ethernet controller: Attansic Technology Corp. L1 Gigabit
Ethernet Adapter (rev b0)  (Built into ASUS P5QC mobo)

Cat5 cabling - home-grown.

Switch:
D-Link DGS-2208 Gigabit switch
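
(One thing worth checking with home-made Cat5 on gigabit gear is that each
link really negotiated 1000 Mb/s full duplex.  On a Linux box something like
this does it - eth0 is just a placeholder for the interface name:

  ethtool eth0 | grep -E 'Speed|Duplex'

A link that silently fell back to 10/100 Mb/s, or to half duplex, can easily
produce the kind of numbers you are seeing.)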


> I am using Lustre as the backend to Samba CTDB. I don't know how much you
> know about Lustre, so I won't bore you with a sermon :-). (I am not a Lustre
> expert myself - not yet ;-) )

I'm always ready to listen to an expert talk about something that
excites him/her.

> But my initial tests of Lustre do give promising results. Using a native
> Lustre client on Linux, I can get high throughput, the only limitation
> being the network interconnect on the client side. A 4-port PCI-E Intel
> EEPRO, with the ports bonded together, should give me 4 x 30 = 120 MB/s
> (in a worst-case scenario, allowing for inferior equipment).

Using two bonded ports of the Intel quad-port NIC (Intel Corporation 82571EB
Gigabit Ethernet Controller) I've seen aggregate rates of 90 MBytes/sec in
each direction.  That was a DRBD sync of two block volumes each way, i.e.
2 x 4TB from machine A to B and 2 x 4TB from machine B to A, both running
at the same time.
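
For reference, the Linux side of that is just the stock kernel bonding
driver.  A minimal sketch of the old-style setup (the interface names, mode
and address below are placeholders, not my actual configuration):

  # /etc/modprobe.d/bonding - load the bonding driver in round-robin mode
  alias bond0 bonding
  options bonding mode=balance-rr miimon=100

  # bring the bond up and enslave two physical ports
  ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
  ifenslave bond0 eth1 eth2

Keep in mind that balance-rr can reorder packets within a single TCP stream,
and that with 802.3ad (LACP) a single connection never exceeds one port's
worth of bandwidth, so the bond mode matters a great deal for SMB traffic.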

But I would not expect to see that high a throughput over a Samba
cluster.  Your mileage might vary!

> As I mentioned in my last post, the issue is whether Samba CTDB can scale
> to that bandwidth. Right now, on a standard Samba install on Debian Etch, I
> am struggling to get more than 1.4 MB/s (NFS etc. works fine at 50 MB/s).

This is not a Samba problem.  You have a hardware issue of some sort.
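
To prove that, take Samba out of the picture and measure the raw TCP path
between the two machines first, for example with iperf (the hostname below
is a placeholder):

  # on the server
  iperf -s

  # on the client, run a 30-second test
  iperf -c fileserver -t 30

A healthy gigabit link should report somewhere around 900 Mbits/sec.  If
iperf is also stuck at a few Mbits/sec, look at the link negotiation, the
cabling and the switch port before touching smb.conf at all.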

> It's only when I resolve this that I can look at CTDB with Lustre.

I'd like to be kept updated on your progress with this.

> If I had the luxury of using a native Lustre client, I am sure of meeting
> or exceeding the 100 MB/s objective. Unfortunately, Macs don't have a
> native Lustre client yet. Of course, the whole system will still have to
> be carefully hand-built and tuned, with each component - NIC, motherboard,
> switch, cabling, etc. - chosen to work optimally with the others.
> 
> I guess my current speed issue is related to the Samba version.

Don't guess - prove it. :-)  There have been way too many such
suspicions on this list.  It's time this got put to bed.
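
An easy way to measure Samba itself, without Explorer or Finder in the way,
is to time a large put/get with smbclient, which prints the transfer rate
when the command completes (the share, user and file names below are
placeholders):

  smbclient //server/share -U user -c 'put bigfile.bin'
  smbclient //server/share -U user -c 'get bigfile.bin /dev/null'

If smbclient gets close to the raw wire speed, look at the client side; if
it is also stuck at 1-2 MB/s, then the server and its configuration are
worth a closer look.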

> I am using 3.0.24, which comes standard with Debian Etch. I need to upgrade.
> I will be testing out various newer versions of Samba today.
> 
> I will keep the list posted with the results of my Samba-CTDB and Lustre
> trials as and when they are conducted, as well as with my current Samba
> speed issue.

Please email me (off-list) your smb.conf configuration, and your CTDB
config files.  I'd like to compare notes.
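
For what it is worth, the sort of smb.conf parameters worth double-checking
when throughput is poor look like this (the values below are illustrative
only - measure before and after any change; the 3.2/3.3 defaults are
usually fine):

  [global]
      socket options = TCP_NODELAY
      use sendfile = yes
      read raw = yes
      write raw = yes
      log level = 1

An overly high log level on its own is enough to drag a gigabit link down.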

Cheers,
John T.

> Thanks,
> Godwin Monis
> 
> 
>> godwin at acp-online.co.uk wrote:
>>> John, thanks once again for the quick reply.
>>>
>> ... [snip]...
>>
>>> I am eager to understand how you are getting 50 MB/s on Samba transfers:
>>> given the overhead added by the SMB protocol, that implies you should be
>>> getting 60-75 MB/s using scp/rsync/NFS.
>> My home network running Samba-3.3.1 over 1 Gigabit Ethernet transfers up
>> to 90 Mbytes/sec.  The file system is ext3 on a multi-disk (Linux MD)
>> RAID5 array that consists of 4 x 1TB Seagate SATA drives.
>>
>> When transferring lots of small files the rate drops dramatically.  When
>> I run rsync over the same link between the same systems the transfer
>> rate for large files hovers around 55 Mbytes/sec and drops to as low as
>> 1.5 Mbytes/sec when it hits directories with lots of small files (<100
>> Kbytes).
>>
>>> I am also working on a COTS storage solution using Lustre and Samba CTDB.
>>> The aim is to provide 1000 MB/s to clients (100 MB/s to each client) so
>>> they
>> I have been working on two Samba-cTDB cluster installations. One of these
>> is based on RHEL and has Samba-cTDB on a front-end cluster that sits on
>> top of RHCS, over a GFS2 file system, over LVM, over iSCSI.  The
>> back-end consists of two systems that each have 32TB of data that is
>> mirrored using DRBD.  The DRBD nodes are exported as iSCSI targets.
>>
>> So far, with 2 active front-end nodes (each with 8 CPU cores) running
>> the NetBench workload under smbtorture, the highest peak I/O I have seen
>> is 58 MBytes/sec.  The iSCSI framework uses multiple bonded 1 Gigabit
>> Ethernet adaptors, and the cluster front-end also uses multiple 1 Gigabit
>> Ethernet ports.
>>
>> I would love to find a way to get some more speed out of the cluster,
>> so if you can meet your 100 Mbytes/sec objective I'd love to know how
>> you did it!
>>
>> PS: Using Samba-3.3.1 with CTDB 1.0.70.
>>
>>> can edit video online. The solution also needs to be scalable in terms of
>>> I/O and storage capacity, and built out of open-source components and COTS
>>> hardware so there is no vendor lock-in. Initial tests on Lustre using
>>> standard Dell desktop hardware are very good. However, I need Samba CTDB
>>> to communicate with
>> The moment you introduce global file locking I believe you will see a
>> sharp decline in throughput.  Clusters are good for availability and
>> reliability, but throughput is a bit elusive.
>>
>>> the clients, as they are Apple Macs. I haven't reached the Samba CTDB
>>> configuration yet, but the Gigabit Ethernet issue had me scared until I
>>> received your reply. Now I see a lot of hope ;-).
>>>
>>> Once again thanks for your help and do let me know if I can reciprocate
>>> your kindness.
>>>
>>> Cheers,
>>> Godwin Monis
>>>
>>>
>>>>> Now, if it's not asking for too much, can you let me know
>>>>> 1. the network chipsets used on your server and client
>>>>>
>>>> Main servers
>>>> fileserv ~ # lspci | grep Giga
>>>> 02:09.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5704
>>>> Gigabit Ethernet (rev 03)
>>>> 02:09.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5704
>>>> Gigabit Ethernet (rev 03)
>> NICs on the cluster servers I am working with are:
>>
>> nVidia Corporation MCP55 Ethernet - dual ports on mobos
>> Intel Corporation 82571EB Gigabit Ethernet Controller - quad port
>>
>>>> dev6 ~ # lspci | grep Giga
>>>> 02:09.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5704
>>>> Gigabit Ethernet (rev 03)
>>>> 02:09.1 Ethernet controller: Broadcom Corporation NetXtreme BCM5704
>>>> Gigabit Ethernet (rev 03)
>>>>
>>>> datastore0 ~ # lspci | grep net
>>>> 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
>>>> 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
>>>>
>>>> datastore1 ~ # lspci | grep net
>>>> 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
>>>> 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
>>>>
>>>> datastore2 ~ # lspci | grep net
>>>> 00:08.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
>>>> 00:09.0 Bridge: nVidia Corporation MCP55 Ethernet (rev a2)
>> I hope this is useful info for you. Let me know if I can assist you in
>> any way.
>>
>> Cheers,
>> John T.


