[Samba] Samba async performance - bottleneck or bug?

douxevip douxevip at protonmail.com
Wed Jul 24 16:07:30 UTC 2019


Hi David, have you had the chance to read the previous email? See below. Thanks for the help.

-------- Original Message --------
On Jul 20, 2019, 01:26, douxevip via samba wrote:

> Hi David,
>
> Thanks for your reply.
>
>> Hmm, so this "async" (sync=disabled?) ZFS tunable means that it
>> completely ignores O_SYNC and O_DIRECT and runs the entire workload in
>> RAM? I know nothing about ZFS, but that sounds like a mighty dangerous
>> setting for production deployments.
>
> Yes, you are correct - sync writes are flushed to RAM just like async writes; they stay in RAM for 5 seconds or less and are then flushed to disk. This means that all writes are drastically sped up.
> For big production deployments this is indeed risky, and naturally I wouldn't recommend it to most organizations. For now, I'd like to test how far we can push Samba speeds and what is achievable with small random writes. And for our local setup with a few users, copious backups and a well-functioning UPS system, our risks aren't as high as they could be for others. ZFS checksumming helps protect against data corruption, so in our workload we'd lose about 10 to 20 seconds of work in the worst-case scenario. Regardless, I'm interested to see how fast Samba can theoretically perform.
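>
> For reference, the dataset tunable is set with a one-liner ("tank/synctest" below stands in for our actual dataset name):
>
> zfs set sync=disabled tank/synctest   # convert all sync write requests to async
> zfs get sync tank/synctest            # verify the current setting
> zfs set sync=standard tank/synctest   # restore the default behavior later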
>
> I've tested the Samba share with "strict sync = no" like you said, but it didn't have an effect on the async-only ZFS dataset, as that writes to RAM anyway. For the ZFS dataset that does honor sync requests, I can indeed see that with "strict sync = no" Samba no longer honors the client's sync requests, similar to what ZFS does with sync=disabled.
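>
> For clarity, this is the exact change I tested in smb.conf (it's a share-level parameter, though it can also be set in [global]):
>
> [Media]
> strict sync = no   # ignore client sync/flush requests for this share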
>
> So to summarize, this is the situation:
>
> 1) I run a fio benchmark requesting small, random, async writes. The command is "fio --direct=1 --sync=0 --rw=randwrite --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --group_reporting --name=sambatest --size=32k --time_based" (flags broken out below the list). I run this command both on the host and on the Samba client, against the exact same ZFS dataset
>
> 2) The ZFS dataset only writes async, converting sync to async writes at all times
>
> 3) That same dataset, shared through Samba, also only performs async writes (strict sync = no)
>
> 4) With these settings, the benchmark tops out at 520MB/s on the host. On the Samba client (again, writing to the same end destination) I get 40 MB/s tops. I feel like those benchmark scores should be much closer given the above test.
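>
> For reference, here is that fio command broken out flag by flag (same command as in item 1, just annotated):
>
> --direct=1        # open with O_DIRECT, bypassing the page cache
> --sync=0          # do NOT open with O_SYNC, i.e. request async writes
> --rw=randwrite    # random write pattern
> --bs=4K           # 4 KiB blocks, i.e. small writes
> --numjobs=1       # a single worker process
> --iodepth=1       # one outstanding I/O at a time
> --runtime=60 --time_based   # loop over the file for 60 seconds
> --group_reporting --name=sambatest
> --size=32k        # 32 KiB working set per job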
>
> I've ruled out the network being an issue since a) server and client are hosted on the same box, connected via a Linux bridge (which means traffic never leaves the machine), and b) an iperf3 test with a single thread completed the benchmark at 44Gbit/s.
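>
> For the record, the iperf3 test was along these lines (flags from memory; <host-ip> stands in for the bridge address):
>
> iperf3 -s                  # on the Proxmox host
> iperf3 -c <host-ip> -P 1   # from the client VM, a single stream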
>
> I guess that leaves one question: where is the bottleneck here, and is there perhaps another method to speed up these small writes for the sake of experimentation?
>
> -------- Original Message --------
> On Friday, July 19, 2019 2:00 PM, David Disseldorp <ddiss at samba.org> wrote:
>
>> Hi,
>>
>> On Thu, 18 Jul 2019 19:04:47 +0000, douxevip via samba wrote:
>>
>> > Hi,
>> > I have a ZFS dataset that has sync writes disabled (sync=disabled), which means it will only do async writes; sync requests get converted to async writes. The ZFS dataset is hosted on a single Samsung 840 Pro 512GB SATA SSD.
>> > I have this same dataset served as a Samba share, using Proxmox VE 6. Samba version 4.9.5-Debian (Buster), protocol SMB3_11. Kernel version 5.0.15.
>> > To illustrate, when I do a random sync write benchmark on the host on this dataset, it will use RAM to do the write, drastically speeding up random writes.
>> > The below benchmark command on the ZFS host:
>> > fio --direct=1 --sync=1 --rw=randwrite --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --group_reporting --name=sambatest --time_based
>> > Has an average speed of 520MB/s (which is the maximum speed of my SATA SSD). Despite requesting a sync write, ZFS turns it into an async write, dramatically speeding it up. Clearly the results are great when I benchmark directly from the host into the sync=disabled ZFS dataset. But this doesn't translate to real-world Samba speeds as I would've hoped. My goal is to get Samba as fast as possible with random writes like these, on both my Windows 10 Professional PC (1903) and a VM on the same Proxmox host running a clean Debian 10 Buster install.
>>
>> Hmm, so this "async" (sync=disabled?) ZFS tunable means that it
>> completely ignores O_SYNC and O_DIRECT and runs the entire workload in
>> RAM? I know nothing about ZFS, but that sounds like a mighty dangerous
>> setting for production deployments.
>>
>> > However, when running the exact same speed test command listed above from the Debian 10 VM that has that exact same CIFS share mounted, I only get 35MB/s max speeds. On my Windows 10 machine, using a 700MB folder filled with thousands of files between 16 and 64KB, I get even worse speeds, hovering around 15MB/s and completing the task in about 60 seconds. To clarify - these are all hosted on the same ZFS dataset, which transforms sync writes into async writes.
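>> > For reference, the share is mounted in the VM along these lines (options illustrative, from memory; vers=3.11 matches the negotiated protocol):
>> > mount -t cifs //prox/Media /mnt/media -o username=myuser,vers=3.11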
>> > My question is 1) how can I get Samba to write these to RAM (async) for much faster speeds? I thought that configuring this at the ZFS dataset level would take care of that, but it does not seem to make any difference.
>>
>> If you want to test SMB client buffered I/O, then I'd suggest that you
>> enable oplocks/leases (should be on by default), and then run fio
>> without the sync / direct parameters. If you want Samba to ignore SMB
>> client sync requests, then you could set "strict sync = no", but like
>> the sync=disabled ZFS tunable you mentioned, this parameter plays
>> Russian roulette with your data if the server goes down unexpectedly.
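>>
>> For example, something along these lines (your job minus the sync/direct
>> flags; "buffertest" is just a label) would exercise client-side buffered I/O:
>>
>> fio --rw=randwrite --bs=4k --numjobs=1 --iodepth=1 --size=32k \
>>     --runtime=60 --time_based --group_reporting --name=buffertest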
>>
>> > 2. Where is the bottleneck exactly?
>> >
>> > I currently have this setup in my smb.conf, just listing the lines I edited (the rest is default):
>> > [global]
>> > netbios name = prox
>> > case sensitive = no
>> > server min protocol = SMB3
>> > client min protocol = SMB3
>> > [Media]
>> > path = /zfs/synctest
>> > valid users = myuser
>> > read only = no
>> > writeable = yes
>> > guest ok = no
>> > create mode = 770
>> > directory mode = 770
>> > syncops:disable = true # I tried the test both with this option enabled and with it commented out; it didn't seem to make a difference either way.
>>
>> syncops parameters only have an effect if the syncops VFS module is
>> enabled.
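>>
>> e.g. (untested sketch) something like this in the share definition:
>>
>> [Media]
>> vfs objects = syncops
>> syncops:disable = true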
>>
>> Cheers, David
>

