Samba Speed Question - Please Help!

Mike Brodbelt m.brodbelt at acu.ac.uk
Wed Oct 4 14:41:35 GMT 2000


Rob Tanner wrote:
> 
> Poor UNIX performance with large directories (e.g., 10,000 files) is
> principally due to the overhead of searching the directory space.  In this
> case it would appear that the copy request was based on a wildcard which
> meant the directory had to be searched.  But a second factor in this case
> is the time it takes to set up all the directory pointers (I don't know
> from what was said below whether files were being written to or read from
> the Samba server).  You will also find that on most UNIX systems a tar
> archive containing a single 1GB file can be untarred in just a few seconds,
> basically at about the transfer rate of the disk.  However, with a tar
> archive of 10,000 10KB files, you might as well take a lunch break.  I
> would guess that the time the system takes to create 10,000 file pointers
> and allocate space (one file at a time) is going to exceed the cost of
> wildcard searching (as in "copy *").
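
To put rough numbers on the per-file overhead Rob describes, here is a
quick Python sketch of my own (the file names and sizes are arbitrary)
that writes one 100 MB file and then 10,000 x 10 KB files adding up to
the same total:

import os
import shutil
import time

def timed(label, fn):
    start = time.monotonic()
    fn()
    print(f"{label}: {time.monotonic() - start:.2f}s")

def one_big_file(path, size=10_000 * 10 * 1024):
    # One create, one allocation, one sequential write.
    with open(path, "wb") as f:
        f.write(b"\0" * size)

def many_small_files(directory, count=10_000, size=10 * 1024):
    # Same total data, but 10,000 separate creates and allocations.
    os.makedirs(directory, exist_ok=True)
    chunk = b"\0" * size
    for i in range(count):
        with open(os.path.join(directory, f"file{i:05d}.dat"), "wb") as f:
            f.write(chunk)

timed("one 100 MB file", lambda: one_big_file("big.dat"))
timed("10,000 x 10 KB ", lambda: many_small_files("smallfiles"))

# Remove the test data afterwards.
os.remove("big.dat")
shutil.rmtree("smallfiles")

I would expect the second run to take far longer on most filesystems,
even though the same amount of data hits the disk.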

The original poster was using an SGI, I believe. It's perhaps worth
asking what the filesystem is - SGI's XFS is supposed to be good at
handling these things without imposing a huge overhead. There's also
ReiserFS available, which is a filesystem specifically optimised for
serving large numbers of small files. Although not (AFAIK) available for
IRIX, the original poster might find that a Linux machine running Samba
on a ReiserFS filesystem would give better performance than the powerful
SGI machine quoted.
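
If the original poster wants to measure rather than guess before moving
filesystems, a quick timing pass over the large directory on each
candidate filesystem should settle it. Another rough sketch of mine
(the directory path is just a placeholder) that emulates the wildcard
scan:

import fnmatch
import os
import sys
import time

def scan(directory, pattern="*"):
    # Emulate a "copy *"-style lookup: read every directory entry and
    # match it against the wildcard, timing the whole pass.
    start = time.monotonic()
    matches = [entry.name for entry in os.scandir(directory)
               if fnmatch.fnmatch(entry.name, pattern)]
    elapsed = time.monotonic() - start
    print(f"{len(matches)} entries matched in {elapsed:.3f}s")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")

This only exercises the lookup side, not the per-file writes, so
running it together with the sketch above on each filesystem gives the
fuller picture.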

HTH,

Mike



