[Samba] Re: CTDB + Samba: Tune Read Performance
tim.clusters at gmail.com
Fri Jan 30 22:46:17 GMT 2009
On Fri, Jan 30, 2009 at 2:52 PM, Volker Lendecke
<Volker.Lendecke at sernet.de> wrote:
> On Fri, Jan 30, 2009 at 02:34:27PM -0700, tim clusters wrote:
> > Currently, an SMB server is able to handle a sustained 300MB/s on writes and
> > 200MB/s on reads. Performance remains constant as you scale clients with
> > time-outs, and performance scales as you add another server. I am still not
> > sure if we can extract more from smbd, as the CPU/memory/IO subsystems are
> > less than 30% saturated. It seems the performance bottleneck is the
> > SMB packet size, as the raw network yields 450MB/s for a 64KB packet size.
> Not having followed what you already tried, but I can assure
> you that smbd is not the bottleneck for the raw transfer
> tests. Just this week I was at a customer with 10GigE.
> Tested a get operation with smbclient from master. First run
> 120MB/sec. Increased window size, got around 300MB/sec.
> Activated jumbo frames, got around 600MB/sec. To get this,
> we had to make sure the file was already in RAM. It seemed
> that above 450MB/sec the file system (ZFS on top of some SAN
> with 192 disks in that case) started to be the bottleneck.
> With pure netcat we got a difference of less than 5%,
> definitely below the normal variation.
> I'm stressing the use of the latest smbclient a bit, because
> this should really squeeze out what you can get from your
> hardware; it completely hides the network latencies.
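As a rough sketch of the kind of test described above (the server, share, user, and file names here are placeholders, not taken from this thread), one might time a cache-warm read with smbclient, bumping the socket buffer sizes via its `--socket-options` flag to experiment with the TCP window:

```shell
# Hypothetical smbclient throughput test; //server/share, testuser and
# bigfile are illustrative names. The file should already be in the
# server's page cache so the disk subsystem is not measured.
# smbclient prints the achieved transfer rate when the get completes.
smbclient //server/share -U testuser \
    -O 'TCP_NODELAY SO_RCVBUF=1048576 SO_SNDBUF=1048576' \
    -c 'get bigfile /dev/null'
```

Writing to /dev/null keeps the client's local disk out of the measurement, so the number reflects the network path and smbd itself.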
I shall try the latest smbclient.
By the way, Jumbo Frame is enabled on the 10GigE HCA and raw network
bandwidth peaks at 850MB/s. From the underlying SAN and GPFS file-system,
we get around 1400MB/s aggregate. Single stream bandwidth using native
file-system client (GPFS) with 1MB block-size/packet-size delivers 800MB/s.
I shall play with the network proc settings and post if I come up with further
findings.
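For reference, a minimal sketch of the Linux /proc (sysctl) knobs usually involved in this kind of tuning; the values are illustrative starting points for a 10GigE link, not recommendations from this thread:

```shell
# Raise the maximum socket buffer sizes the kernel will allow.
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
# Widen the TCP autotuning range: min, default, max (bytes).
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
```

Equivalent settings in /etc/sysctl.conf survive a reboot; the interactive form above is convenient while experimenting.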