Samba Speed Question - Please Help!

Justen Marshall justen at al.com.au
Wed Oct 4 03:50:30 GMT 2000


Hi

I'm currently experiencing an excessive amount of CPU usage by my
smbd daemons, and was wondering what I could do to reduce it.

The gear I'm using is...

  - Server is a dual-CPU SGI Octane with 1 GB of memory, and no other
    processes running apart from Samba.

  - The disk space being served is a 300 GB Ciprico RAID volume with
    very fast access speeds.

  - The server has a Gigabit ethernet connection to our switch.

  - Samba 2.0.7 (SGI Freeware pre-compiled and packaged binary).

In my opinion, that should be plenty of resources to serve fifteen or
so NT workstations (which is about what we are running). However,
even during periods of medium usage there can be a dozen or more
individual smbd processes running, each consuming up to 70% CPU (if
one manages to claw its way to the top of the pile it will take that
much, though competition between the processes generally limits each
to 20% or so... with total server CPU use at 100% full-time).

After many days of playing with this problem I have a few pieces of
information that lead me to believe that Samba (as it is configured on
my server at the moment) is doing too much work. I will not be at all
surprised to find that I have overlooked some configuration flag, but
I am at a loss just now about what to try next!

  - I have experimented with "max xmit" and discovered that a value of
    around 8 KB gives the best CPU result for a given file copy, but
    utilisation is still fairly high.

  - Experimented with locks, oplocks and level 2 oplocks, but found
    that the default values were about the best (and those are what we
    were using when the problem was first noticed).

  - Experimented with file transfers and discovered something
    interesting... the copy of a single 1 GB file proceeded at around
    7.5 MB per second, which is in line with the 9 MB/s we achieved
    with FTP (which is always expected to be a bit faster than a more
    complex protocol). However, copying 10,000 files of 10 KB each
    (only 100 MB in total) took AGES, and CPU use went towards 100%
    for the smbd process handling the requests (a sketch of this test
    appears after this list).

  - Unfortunately, our typical usage of the Samba file system is more
    accurately modelled by the test using thousands of small files... a
    general render for a small chunk of one of our 3D scenes may use
    1,000 to 2,000 small scene description files (around 4 KB each),
    plus 400 to 1,000 texture image files (averaging 5 MB, though some
    are MUCH bigger). There are also many intermediate stages that get
    written back to the Samba volume in order to be re-used by other
    processes, so it's not just one-way traffic. And of course, our
    rendered images go back there too... they can be up to 15 MB each.

  - Accessing a file that resides in a directory containing 10,000
    other files appears to be much slower and much more CPU intensive
    than performing the same operation on a file in a nearly empty
    directory (see the second sketch after this list).

  - We have used "par" to analyse the system calls made by the Samba
    daemon under heavy load, and it seems to be doing a lot of excess
    directory reading. For the technically minded, ngetdents() is
    called LOTS of times. Although it is rather expensive to call,
    there doesn't seem to be much caching of the results... is there a
    way to cache directory contents? I have activated the "getwd
    cache" option, but that only seems to cache the tree walk, not the
    directory contents themselves.
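
To make the small-file case concrete, here is a minimal sketch of the
kind of copy test I have been timing. It is a hypothetical stand-in
(DEST is a placeholder for a path on the mounted share), not the
exact commands we used:

    import os
    import time

    # Placeholder: a directory on the mounted Samba share.
    DEST = "/mnt/samba-test"
    NUM_FILES = 10000
    FILE_SIZE = 10 * 1024  # 10 KB each, ~100 MB in total

    if not os.path.isdir(DEST):
        os.makedirs(DEST)

    payload = b"x" * FILE_SIZE

    start = time.time()
    for i in range(NUM_FILES):
        path = os.path.join(DEST, "file%05d.dat" % i)
        with open(path, "wb") as f:
            f.write(payload)
    elapsed = time.time() - start

    total_mb = NUM_FILES * FILE_SIZE / (1024.0 * 1024.0)
    print("Wrote %.0f MB in %.1fs (%.2f MB/s, %.0f files/s)"
          % (total_mb, elapsed, total_mb / elapsed, NUM_FILES / elapsed))

The contrast with the single large file suggests it is the per-file
overhead, rather than the raw data rate, that dominates this test.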
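
Similarly, the large-directory effect can be probed with something
like the following sketch (again hypothetical; point TEST_DIR at a
directory on the share, and note that it leaves its test files
behind):

    import os
    import time

    # Placeholder: a directory on the mounted Samba share.
    TEST_DIR = "/mnt/samba-test/dirtest"

    def avg_open_time(n_files, n_probes=500):
        """Populate TEST_DIR with n_files empty files, then time
        re-opening one of them."""
        if not os.path.isdir(TEST_DIR):
            os.makedirs(TEST_DIR)
        for i in range(n_files):
            open(os.path.join(TEST_DIR, "f%05d.tmp" % i), "w").close()
        target = os.path.join(TEST_DIR, "f00000.tmp")
        start = time.time()
        for _ in range(n_probes):
            open(target).close()
        return (time.time() - start) / n_probes

    # Compare a nearly empty directory with one holding 10,000 files.
    for n in (10, 10000):
        print("%5d files: %.2f ms per open" % (n, avg_open_time(n) * 1000))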

If your opinion is that our usage patterns are aggravating the
problem, I would agree with you! However, I really NEED to use Samba
here as part of this setup. It is the only thing that will bridge the
gap seamlessly, and apart from the slowdown caused by over-use of the
server's CPUs, it is performing flawlessly (i.e., we have all the
access to the files we need, in a straightforward and natural manner).

We are using a heterogeneous network comprised of SGI and NT
workstations, and Samba is the only solution we have found that fits
into our production pipeline.

I have tried an NFS-based system (my current Samba setup replaces our
old system, which used Maestro). Although the NFS system was not
flexible enough for our needs, even at peak use it did not consume
anywhere near the amount of CPU that my Samba daemons currently do.

We have used Samba in the past for other projects, and it worked fine,
but the user software was different and made use of fewer, larger
files. Testing our old server under our new conditions yielded the
same result... over-use of CPU. I didn't notice the differences in our
usage patterns before I committed to using it for this project... but
now I'm neck deep and I really need help!

Yes, I have read the Samba Performance Tuning document and have
followed all of its recommendations.
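
For reference, here is roughly the performance-related fragment of my
smb.conf as it stands. The share name and path are placeholders, and
the values are the ones described above:

    [global]
        # ~8 KB gave the best CPU result in my file-copy tests
        max xmit = 8192
        # caches the tree walk, but not directory contents
        getwd cache = yes
        # oplocks / level 2 oplocks left at their defaults,
        # which tested best

    [scenes]
        # placeholder share; the real one lives on the Ciprico RAID
        path = /raid/scenes
        read only = no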

If you could please offer me any advice on how to tune Samba for
high-frequency, small-file access, I would be extremely grateful.
Justen
-- 
.-----------------------.------------.--------------------------.
| Justen Marshall       |            | e-mail: justen at al.com.au |
| Technical Director    |            | phone:   +61-2-9383-4831 |
| Animal Logic Pty Ltd  |            | fax:     +61-2-9383-4801 |
|-----------------------^------------^--------------------------|
| Atheism is a non-prophet organization.                        |
`---------------------------------------------------------------'



