[Samba] 1MB/s gigabit transfers on dell poweredge

godwin at acp-online.co.uk godwin at acp-online.co.uk
Sat Mar 14 17:30:33 GMT 2009


Hi Volker, John, folks,

I upgraded the Samba binaries to 3.0.24 and reduced the log level, and my
Samba server is now giving throughputs of 32-38 mbps, with some spikes
beyond 60 mbps.
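
For reference, the logging change is just a matter of dropping the debug
level in smb.conf; anything above level 2 or so adds noticeable per-packet
overhead on a busy share. A minimal sketch (the share name and path are
only placeholders for my setup):

    [global]
        log level = 1
        max log size = 1000

    [data]
        path = /srv/data
        read only = no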

That's way below what John's getting. Anyway, I can at least finish off
this installation, as my Samba server can match or exceed the client's
benchmark against his existing MS Windows server.

Since I am using 3 x 1 TB disks in a hardware RAID 5 array, disk I/O
should not be an issue. I expect the issue to be too many patch points
(the joy of structured cabling) or poor-quality switches.
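
Before blaming the cabling I will probably run a couple of quick sanity
checks to separate the disk from the network, roughly along these lines
(assuming a Linux server with iperf available; paths and addresses are
just examples):

    # raw sequential write to the RAID 5 volume, bypassing Samba
    dd if=/dev/zero of=/srv/data/testfile bs=1M count=4096 conv=fdatasync

    # raw TCP throughput across the patch panels and switches
    # on the server:  iperf -s
    # on the client:  iperf -c <server-ip> -t 30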

I will be looking at optimisations to further improve throughput.
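
The usual suspects I plan to try first are the socket options and the raw
read/write settings, roughly as below; these are only starting points and
the right values depend on the NICs and switches, so treat it as a sketch
rather than a recipe:

    [global]
        socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072
        read raw = yes
        write raw = yes
        use sendfile = yes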

Volker's 700 MBytes/second on 10GbE sounds good and could be considered
for the client interconnects for Lustre. I've got to work out the cost of
cabling and switches for 10GbE, which could be a deterrent. Otherwise,
putting in one 10GbE NIC should be roughly equivalent to putting in three
or four 4-port GbE NICs.
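
If 10GbE turns out to be too expensive, the GbE ports could be aggregated
with the Linux bonding driver instead; a rough sketch (interface names are
examples, and the switch must support 802.3ad for this mode):

    # /etc/modprobe.conf (or modprobe.d), assuming the bonding module
    alias bond0 bonding
    options bond0 mode=802.3ad miimon=100

    # bring up the bond and enslave the GbE ports
    ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
    ifenslave bond0 eth1 eth2 eth3 eth4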

John, I will keep you posted on my work on Lustre. I expect to start work
on it in about 10 days. We could share some info on optimising CTDB to
take advantage of the high I/O provided by Lustre. A SAN built from open
source software and COTS hardware seems a very interesting proposition if
the client interaction can be done via the SMB protocol, especially if the
throughputs are scalable.
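
For the CTDB side, my understanding is that the minimum is a recovery lock
on the shared file system, a nodes file with the private addresses, and
clustering switched on in smb.conf; roughly (the addresses and the Lustre
mount point below are placeholders):

    # /etc/sysconfig/ctdb (location varies by distribution)
    CTDB_RECOVERY_LOCK=/mnt/lustre/.ctdb/recovery.lock
    CTDB_NODES=/etc/ctdb/nodes

    # /etc/ctdb/nodes - one private address per node
    10.0.0.1
    10.0.0.2

    # smb.conf on every node
    [global]
        clustering = yes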

I am also interested in pNFS (NFSv4), but again it's available only on
Solaris. Plus, Windows clients don't support NFS out of the box, so it
can't be considered for most of the potential clients.

Cheers,

Godwin


> On Fri, Mar 13, 2009 at 08:53:11PM -0500, John H Terpstra - Samba Team
> wrote:
>> My home network running Samba-3.3.1 over 1 Gigabit ethernet transfers up
>> to 90 Mbytes/sec.  The file system is ext3 on a Multidisk RAID5 array
>> that consists of 4x1TB Seagate SATA drives.
>
> Hey, bragging time :-)
>
> Using one 3.2 smbclient against smbd reading off RAM disk
> I've seen more than 700MBytes/second over 10GigE where the
> raw TCP speed was not much more than that.
>
> With real client apps reading consecutive small files using
> the just-added preopen module the performance was more than
> 300MBytes/second. But here we had to hand-tune the server
> using the preopen module to completely hide the file system
> latencies.
>
> Executive summary: smbd is no real bottleneck, it's client
> behaviour and the rest of the infrastructure that limits
> you. Performance tuning to those numbers is not really easy
> though, you really have to look at all components on the
> path from the Win32 app down to the raw rotating disk.
>
> Volker
>



