[Samba] samba performance when writing lots of small files

Jeremy Allison jra at samba.org
Fri Nov 8 19:17:21 UTC 2019


On Thu, Nov 07, 2019 at 01:17:03PM +0100, thoralf schulze via samba wrote:
> Hi Jeremy / all,
> 
> On 11/6/19 10:39 PM, Jeremy Allison wrote:
> > This is re-exporting via ceph whilst creating 1000 files,
> > yes? What timings do you get when doing this via Samba
> > onto a local ext4/xfs/btrfs/zfs filesystem?
> 
> Yes, creating 10k small files. Doing the same on a local SSD,
> formatted with an ext4 fs without any special options:
> 
> root@plattentest:/mnt-ssd/os# time for s in $(seq 0 9999); do echo $s > test-$s; done
> 
> real	0m0.376s
> user	0m0.130s
> sys	0m0.246s
> root@plattentest:/mnt-ssd/os#
> 
> And on the very same SSD, exported via CIFS and re-mounted
> locally; the Samba and share config are identical to the ceph
> test:
> 
> root@plattentest:/mnt-ssd-cifs/cifs# time for s in $(seq 0 9999); do echo $s > test-$s; done
> 
> real	0m43.228s
> user	0m0.445s
> sys	0m2.692s
> root@plattentest:/mnt-ssd-cifs/cifs#
> 
> That's more in line with what one would expect :-)
> Do you have any thoughts on why the same test with the ceph
> share is more than an order of magnitude slower? AFAIK the
> ceph client should block until all data has been written, no
> matter whether it is being fed by echo or by Samba. The cephfs
> is mounted via the kernel driver, if that matters.

I'm not sure - you're going to have to look into
how the ceph kernel client deals with the syscall
requests smbd is making.
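
For what it's worth, one way to see what smbd is actually
asking the kernel for is to trace the smbd child that serves
your test client while the loop runs. A rough sketch - the
pgrep selection is a placeholder, pick the right child for
your connection:

    # Count syscalls and the time spent in each while the
    # 10k-file loop runs on the client:
    smbd_pid=$(pgrep -n smbd)    # placeholder: newest smbd
    strace -f -c -p "$smbd_pid"

Comparing the per-syscall summaries between the cephfs mount
and the ext4 re-export should show where the extra latency
is going.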

Have you tried using the vfs_ceph module in smbd
to avoid the extra trip into the kernel?
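
For reference, a minimal share stanza using vfs_ceph might
look like the sketch below. The share name, path, ceph.conf
location and cephx user are assumptions - adjust them to
your setup:

    [cephshare]
        # With vfs_ceph the path is interpreted relative to
        # the cephfs root, not to a local kernel mount.
        path = /test-share
        vfs objects = ceph
        # Required because the share path is not a locally
        # mounted filesystem:
        kernel share modes = no
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba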


