Bad NT performance on large Samba directories (PR#19452)

Jeremy Allison jallison at
Wed Aug 4 17:21:17 GMT 1999

mc.broersma at wrote:

> So, a simple test case is selecting all 20,000 files and performing a
> delete action from the NT client.
> The maximum speed is about 3 files/sec, and the Samba server uses all
> its CPU time for this single action from one NT client. It takes
> hours to complete this command.
> I tried the Samba lock options and tuning parameters, but couldn't get
> an improvement, so I put all options back to their default values.
> I used the Solaris 'truss' command to view what system calls the server
> is making.
> These are mostly readdir and stat calls, and that is what slows the
> Samba server down so badly, using 100% of its CPU time.
> I also watched the network activity using 'snoop' during the delete
> action of the NT client, and I discovered that much of the
> information the Samba server is gathering with all these system calls
> is never actually sent to the NT client.
> Who needs all this information, and why does the Samba server show
> this behavior?

Unfortunately the Windows client is asking for this
information, and the Samba server is trying to follow
the requests from the client (as it is supposed to do).

Right now Samba is very slow with a large UNIX directory. It's
something we've known about for a while, but it is somewhat
difficult to fix.

My recommendation is to try to reduce the directory size
from 20,000 files (which does seem rather large). I believe
Linux ext2fs will also be quite slow on a directory this size
(Linux experts please correct me if I'm wrong on this).

I'm looking at doing directory content caching in Samba, but it's
somewhat fiddly to get right...


	Jeremy Allison,
	Samba Team.

Buying an operating system without source is like buying
a self-assembly Space Shuttle with no instructions.
