[Samba] max concurrent CIFS connections

James Peach jorgar at gmail.com
Wed Sep 7 23:33:19 GMT 2005


On 9/8/05, John H Terpstra <jht at samba.org> wrote:
> On Wednesday 07 September 2005 16:18, Gerald (Jerry) Carter wrote:
> > Jeremy Allison wrote:
> > | On Wed, Sep 07, 2005 at 09:22:31PM +0200, Pseudomizer wrote:
> > |>I need some help please. I have been told by an administrator that
> > |> Samba only supports up to 3,000 concurrent CIFS connections and that
> > |> each connection reserves 5MB of memory.
> > |
> > | This is incorrect and way too high. It depends on how active each
> > | connection is. On the HP PSA (Print Server Appliance) I believe it's
> > | a much smaller number (I'm sure Jerry can give more accurate figures).
> > | Of course that's for printing only. Depends what the users are doing.
> >
> > My rule of thumb is 2MB per smbd.  With that many connections
> > I'd maybe go for an 8-way SMP box?  But to be honest, I've
> > never done a server that large in production myself.  And I
> > don't sysadmin anymore as a general rule.
> 
> Samba does not scale linearly without bounds. Nothing does!
> 
> The real bottleneck needs to be identified.
> 
> In a site that has heavy network usage, 30-40 concurrent users (all writing to
> disk at the same time) can choke up a server in an amazing way. In this
> situation, adding memory and/or CPUs achieves very little.
> 
> Look at it this way:
> 
> If the disk I/O architecture is provided by a controller that permits a
> sustained write rate of 300 Mbytes/sec (an amazingly fast RAID controller
> that can nearly saturate a whole PCI-X 133MHz 64-bit bus), the I/O limit is
> easily reached with 3 gigabit network cards that are appropriately
> configured. Anything more than 4 CPUs will hardly help overall performance.
> From past benchmarking work, as well as from practical field experience, the
> benefit of adding memory in excess of 4GB under such load conditions is
> marginal, to say the least.
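
A rough Python sketch of that arithmetic (the 300 MB/s controller and
the 3 NICs are the figures from the example; the ~95% line-rate
efficiency for gigabit Ethernet is an assumed fudge factor, not a
number from this thread):

# Rough sanity check of the disk-vs-network arithmetic above.
GBE_LINE_RATE_MB = 1000 / 8 * 0.95  # ~119 MB/s usable per gigabit NIC (assumed)
RAID_WRITE_MB = 300                 # sustained write rate of the controller

nics = 3
network_mb = nics * GBE_LINE_RATE_MB
print(f"{nics} GbE NICs ~= {network_mb:.0f} MB/s vs {RAID_WRITE_MB} MB/s of disk writes")
# -> ~356 MB/s of network input against 300 MB/s of disk: the disk
#    subsystem saturates first, so CPUs beyond ~4 buy almost nothing.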
> 
> On the other hand, if overall network traffic is light, adding memory and
> CPUs will make all clients more responsive. But adding memory and/or CPUs
> does nothing to eliminate a disk I/O limitation.
> 
> My rule of thumb is 3.5 MBytes per concurrently writing Windows client, plus
> about 2 MB per additional client. This means that active clients will be
> served from physical memory and passive clients will most likely be in swap.
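
That rule of thumb as a throwaway Python helper (the 40-writers /
960-idle split below is just an illustrative input, not a figure from
the thread):

# 3.5 MB per concurrently writing client plus ~2 MB per idle client,
# per the rule of thumb above.
def samba_memory_mb(writing_clients, idle_clients):
    return 3.5 * writing_clients + 2.0 * idle_clients

# e.g. 1000 connections of which 40 are actively writing:
print(samba_memory_mb(writing_clients=40, idle_clients=960))  # -> 2060.0 (MB)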

Let's assume 7000 users, say each user has their own smbd daemon (i.e.
no connection multiplexing) and each smbd has 2MiB RSS. That's ~14GiB
(worst case) for starters, but let's say that only 1000 users have a
connection at any one time, which gives us more like 2GiB for smbd
process memory requirements.
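
Spelled out (same numbers as above):

# Memory estimate: 2 MiB RSS per smbd, 7000 total users vs
# ~1000 concurrent connections.
SMBD_RSS_MIB = 2

worst_case_gib = 7000 * SMBD_RSS_MIB / 1024  # every user connected at once
typical_gib = 1000 * SMBD_RSS_MIB / 1024     # ~1000 concurrent connections
print(f"worst case ~{worst_case_gib:.1f} GiB, typical ~{typical_gib:.1f} GiB")
# -> worst case ~13.7 GiB, typical ~2.0 GiB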

Let's say that each user has a working set of 10MiB and that they need
512KiB/s of bandwidth (on average). So your aggregate bandwidth
requirement is ~500 MiB/sec, which means driving 5 gig-e NICs at line
rate, which will cost you about 5 CPUs (depending on a lot of stuff).
The I/O rate you need from your storage will depend on how much RAM
you stick in the server. With the assumptions above, if you stick
15-20 GiB of RAM in and the working set is pretty stable, you'll
hardly ever need to do I/O. Otherwise, you'll need at least enough I/O
capacity to drive the network (~500MiB/s).
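
The same sizing as a Python sketch (the per-user figures are the
assumptions stated above; ~119 MiB/s of usable line rate per gig-e NIC
is an additional assumption of mine):

# Bandwidth and RAM sizing under the assumptions above.
users = 1000
per_user_kib_s = 512   # average per-user bandwidth
working_set_mib = 10   # per-user working set
GIGE_MIB_S = 119       # assumed usable line rate per gigabit NIC

total_mib_s = users * per_user_kib_s / 1024
print(f"aggregate bandwidth ~{total_mib_s:.0f} MiB/s")          # ~500 MiB/s
print(f"gig-e NICs needed ~{total_mib_s / GIGE_MIB_S:.1f}")     # ~4.2, call it 5
print(f"RAM to cache working sets ~{users * working_set_mib / 1024:.0f} GiB")  # ~10 GiB
# If the working sets stay cached (~10 GiB + ~2 GiB of smbds + headroom,
# i.e. the 15-20 GiB above), disk I/O stays light; otherwise the storage
# must also sustain ~500 MiB/s.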

-- 
James Peach | jorgar at gmail.com

