[Samba] Re: High load average and client timeouts

Dragan Krnic dkrnic at lycos.com
Wed Jan 14 14:15:46 GMT 2004

> I am setting up a proof-of-concept backup server at my 
> office.  The end idea is for a dozen or so of our ~200 
> workstations to dump images (like PowerQuest 
> DeployCenter, not JPEG) to a 2Tb RAID5 at reasonable
> speeds.

Your backup program is a bit less general than the tar
I use, but perhaps you can draw an analogy from my
comments below and apply it to your case. Basically
I think your problem is that continuous writing to an
SMB share is rather fragile. If your backup tool can
write its data to stdout, then you might attach it to
an rsh or rexec pipeline with buffering software on
the Linux side.
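The shape of that pipeline, sketched with local
stand-ins so it runs anywhere (printf stands in for the
backup tool's stdout stream, cat for the buffer; the
tool name, host, and paths in the comment are
hypothetical):

```shell
#!/bin/sh
# On a real setup this would be something along the lines of
#   backuptool --to-stdout | rsh linuxhost 'xt -n512 > /tars/ws1.img'
# (all names hypothetical). Here printf stands in for the backup
# stream and cat for the buffering filter.
printf 'image-data' | cat > /tmp/ws1.img
cat /tmp/ws1.img
```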

> One nagging question is what would the "real" server's 
> performance be?  We have spec'd dual Athlon MP 2200+ 
> CPUs, a 3ware 7506-12 controller with 12  200gb 
> Western Digital drives, and 4gb of RAM. (Whole thing 
> is $6,000!!)  Thing is, I don't think the RAID would 
> be much faster (writing) than the existing IDE drive.  
> I'd hate to blow six grand and find out it doesn't 
> perform any better.

I can only speculate about what a "real" server would
do, but I've been doing something like that for a long
time with a similar workstation: SuSE 8.2, P4/3GHz,
2 GB RAM, 480 GB 4-way IDE stripe. I never bothered to
look at load numbers because it works so smoothly.
25 admin shares are backed up simultaneously every
workday without affecting the interactivity of remote
sessions. The built-in Gbit NIC soaks up all 100 Mbps
that the switch passes on to it, plus about 20 MB/s
from a Samba PDC via a Gbit link, for an aggregate
maximum of about 32 MB/s. Never any aborts.
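The aggregate figure follows from the two links
(100 Mbps is roughly 12 MB/s of payload):

```shell
# 100 Mbps from the switch is about 100/8 = 12 MB/s;
# the PDC adds about 20 MB/s over the Gbit link.
echo "$((100 / 8 + 20)) MB/s aggregate"
```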

The trick is probably the little buffering filter (xt)
between the backup tool and the disk. It is more
efficient both because the reading side accepts
incoming data without delay and because the writing
side only writes to disk once a high-water mark is
reached, so when it starts writing it flushes the data
in one big chunk, which reduces fragmentation.
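xt itself isn't generally available, but GNU dd's obs=
option illustrates the same principle: input is
collected into one large output block before each
write. A minimal sketch, not a replacement for xt:

```shell
#!/bin/sh
# Collect the incoming stream into 32 MB output blocks before
# each write, much as xt's high-water mark does. dd here only
# illustrates the buffering idea.
printf 'backup stream' | dd obs=32M of=/tmp/buffered.out 2>/dev/null
cat /tmp/buffered.out
```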

The downside is that I use 32 MB of RAM per backup
session, so you need more memory. The buffer size is
settable to a multiple of 64 KB, from 10 up to
(SHMMAX/64KB - 3) chunks. 512 works fine for me, but
less would probably work decently too.
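The 32 MB figure is just the chunk count times the
64 KB chunk size:

```shell
# -n512 means 512 chunks of 64 KB each per session.
echo "$((512 * 64 / 1024)) MB per session"
```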

I use tar as the backup tool. All shares are
smbmount'ed under /mnt, so backing up the data is
basically:

 for share in $(</etc/bkp-shares)
 do  cd /mnt/$share
     ( tar cbf 64 - . | xt -n512 > /tars/$share ) &
 done

Well, there's a little more for logging (2>/logs/$share)
and incremental backups (find . -mtime -o -ctime | tar -T -...),
but I didn't want to clutter the simple example.
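Spelled out as a complete runnable script, with the
logging redirection folded in and cat standing in for
xt (the /tmp paths and share names are demo stand-ins
for /mnt, /tars, /logs and /etc/bkp-shares), the loop
looks roughly like:

```shell
#!/bin/sh
# Runnable sketch of the backup loop; each share is tarred in
# the background, stderr goes to a per-share log, and wait lets
# all sessions finish before the script exits.
mkdir -p /tmp/mnt/alpha /tmp/mnt/beta /tmp/tars /tmp/logs
echo data > /tmp/mnt/alpha/f
echo data > /tmp/mnt/beta/f
for share in alpha beta              # stand-in for $(</etc/bkp-shares)
do  ( cd /tmp/mnt/$share &&
      tar cbf 64 - . 2>/tmp/logs/$share | cat > /tmp/tars/$share ) &
done
wait                                 # wait for all backup sessions
ls /tmp/tars
```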

The filter xt takes optional arguments
-i infile, -o outfile, -s KBchunk, -n numchunks, and
-t sleeptime. Defaults are stdin, stdout, 64 KB, 10,
and 1.

I also use it to transfer backups to tape. It can read
from the stripe at about 130 MB/s and the tape can
accept about 80 MB/s when no other I/O takes place,
but doing the two at once drops the rate to about
35 MB/s, so on average only about 50 MB/s is achieved.
A "real" server not limited to a 32-bit/33 MHz PCI bus
could probably do a little better.

> System specs:
>   Linux 2.4.22 (custom)
>   Slackware 9.1
>   Samba 3.0.1
>   2.2Ghz Intel Celeron
>   60gb Maxtor 6Y060L0 on UltraATA/133
>   128mb RAM, 256mb swap
>     # Will try to add RAM next week
>   On-board Intel Pro/1000 (Gigabit) NIC
