[Samba] Re: High load average and client timeouts

Daniel Johnson Progman2000 at usa.net
Thu Jan 15 15:51:19 GMT 2004


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 14 Jan 2004 at 15:15, Dragan Krnic wrote:
> Basically
> I think your problem is that continuous writing to an 
> smb-share is rather fragile. If your backup program 
> allows you to output data to stdout, then you might 
> attach it to an rsh or rexec filter with buffering 
> software on the Linux side. Read my comment.

It's actually a fake-Windows-app running in DOS.  Given its features
(and the mentality of my bosses), it's not going to be dropped.  :-/
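
If it could write to stdout, I imagine the pipe you describe would look
roughly like this (the tool name, its flag, the host name, and the output
path are all made up; I'm borrowing the xt invocation from your loop below):

  # hypothetical: the backup tool streams to stdout, rsh forwards it to the
  # Linux box, where the buffering filter absorbs it and writes big chunks
  backup_tool --to-stdout | rsh linuxserver 'xt -n512 > /backups/ws1.dump'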
 
> I can speculate what a "real" server would do, but 
> I've been doing something like that for a long time 
> with a similar workstation, SuSE 8.2, P4/3G, 2GB RAM, 
> 480 GB 4-way IDE stripe and never bothered to look at 
> load numbers because it works so smoothly. 25 admin 
> shares are being backed up simultaneously every 
> workday but without affecting interactivity of
> remote sessions. The built-in Gbit NIC is using 
> up all 100 Mbps that the switch passes on to it 
> plus about 20 MB/s from a samba PDC via a Gbit 
> link, so there is an aggregate max speed of about 
> 32 MB/s. Never any aborts.

We've given thought to putting it on a 10/100 switch instead of the
current 100/1000 one, but with an NT5 system performing well on gigabit,
we couldn't justify the downgrade.
 
> The trick is probably in the little buffering filter
> (xt) between the backup tool and the disk. This is 
> more efficient both because the reading part accepts 
> incoming data without delay and because the writing 
> part only writes data to disk once a high mark is 
> reached so when it starts writing it flushes data
> in one big chunk, which reduces fragmentation.

I would've thought that Samba and/or the kernel would implement a
similar buffer already.  Any gurus care to shed some light on this? 
Perhaps a relative lack of RAM is keeping the buffers from
functioning properly?
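
The closest off-the-shelf thing I can think of is reblocking with dd,
though I'm only guessing that this approximates what xt does:

  # rough stand-in for a high-water-mark buffer: dd collects 64 KB reads
  # and flushes them to disk in 32 MB output blocks ("ws1" is a placeholder)
  tar cbf 64 - . | dd ibs=64k obs=32M of=/tars/ws1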

> The downside is that I'm using 32 MB RAM per backup
> session, so you need more memory. The buffer size 
> is settable to a multiple of 64 KB between 10 and 
> (SHMMAX/64KB - 3). 512 works fine for me but less 
> would probably work decently too.

With 4 GB spec'd, 32 MB each is no problem.  Heck, I could still handle
half the office at once with RAM to spare.
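(For what it's worth, 512 x 64 KB = 32 MB, so the -n512 setting matches
the per-session figure you quoted.)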

> I use tar as backup tool. All shares are smbmount'd
> under /mnt so backing the data up is basically
>
>  for share in $(</etc/bkp-shares)
>  do  cd /mnt/$share
>      ( tar cbf 64 - . | xt -n512 > /tars/$share ) &
>  done

...so the server is pulling from the clients, rather than the clients
pushing to the server.  Makes sense.
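
Presumably the shares get mounted up front with something along these
lines (the use of the C$ admin share and the credentials are my guesses,
not necessarily how you do it):

  # one-time setup: mount each workstation's admin share under /mnt
  for share in $(</etc/bkp-shares)
  do  mkdir -p /mnt/$share
      smbmount "//$share/c\$" /mnt/$share -o username=backup,password=secret,ro
  done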

> I also use it to transfer backups to tape. It can read 
> from the stripe at about 130 MB/s and the tape can 
> accept about 80 MB/s, if no other I/O takes place, but 
> combining the two reduces the speed to about 35 MB/s so
> that on average only about 50 MB/s are obtained. A "real" 
> server not limited to 32-bit/33MHz PCI could probably 
> do a little better.

The Tyan mobo we want has an onboard Adaptec Ultra320 SCSI
controller.  Combined with a 64-bit 3ware RAID controller, I don't
think we'll have much I/O bottleneck there.
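
For the tape stage I picture the same filter sitting in front of the
drive, something like this (the device node is just the usual Linux SCSI
tape name, and the invocation is my guess):

  # speculative: stream a finished archive to tape through the same buffer
  xt -n512 < /tars/ws1 > /dev/nst0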

I wish I could use your method for our backups, but I'd start a riot
if I suggested moving away from DeployCenter.  Its handling of
varying partition sizes, boot sectors, and such has saved our hides
on more occasions than I care to recall.

- -- 
Through the modem, off the server, over the T1, past the frame-relay,
< < NOTHIN' BUT NET > >
 
Daniel Johnson
Progman2000 at usa.net
http://dannyj.come.to/
Public PGP Keys & other info: http://dannyj.come.to/pgp/


-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.3 (MingW32) - GPGshell v2.95

iD8DBQFABrbi6vGcUBY+ge8RAqFAAJ45VAPut0YhR64AZRp+0lMWbrJ0lQCghgrx
XlMePwcYtUhH3/B2q7FdZnE=
=IQR+
-----END PGP SIGNATURE-----


