[Samba] samba w/vfs notify_fam perf hit of 25% on writes noticed...?

Linda W samba at tlinx.org
Mon Feb 17 04:15:34 MST 2014


Didn't mean to imply it did if that was how it was taken.

If famd is supposed to monitor changes, it's possible that it gets
notified with every change.  Given that: my client is doing 8M writes
(for a total of 4G transferred per test).

What I'm wondering is how many I/O calls that would translate to
on the server.  Are large I/Os broken down into smaller
calls on the server?

Example -- if the server wrote out a 4K block as soon as it got
4K in, that would translate to roughly 1 million reads in about 10
seconds, and it seems like even that might not be enough to do it.
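
Spelled out (rough numbers, treating the 4G as 4 GiB and ignoring
protocol overhead):

    4 GiB / 4 KiB per call    = 1,048,576 calls
    1,048,576 calls / ~10 s   = ~100,000 calls/second

which is a lot of small I/Os, but still seems short of explaining a
25% hit by itself.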

In both cases, though, they are *char* devices... I wonder
if something is causing a "change" notice to go out to
'famd' (via the kernel) with each 'char' written?

Now that could be ugly, but nothing that the average
case would need to worry about.  (Who writes to char
devs for high I/O speeds?)  I just used /dev/null and
/dev/zero as devices I thought would use the least CPU,
to isolate Samba from the effects of actually reading
from or writing to a disk.
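
One way to check would be a tiny standalone FAM client that just counts
change events on the shared directory while the test runs.  A rough,
untested sketch (assumes the libfam/gamin headers are installed and you
link with -lfam; the path argument is just illustrative):

/* famcount.c -- untested sketch: count famd change events on a path.
 * Build (assumption): gcc famcount.c -o famcount -lfam
 * Run it against the share directory while the dd benchmark runs.
 */
#include <fam.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : ".";  /* directory to watch */
    FAMConnection fc;
    FAMRequest fr;
    FAMEvent fe;
    unsigned long total = 0, changed = 0;

    if (FAMOpen(&fc) < 0) {
        fprintf(stderr, "FAMOpen failed -- is famd (or gamin) running?\n");
        return 1;
    }
    if (FAMMonitorDirectory(&fc, path, &fr, NULL) < 0) {
        fprintf(stderr, "FAMMonitorDirectory(%s) failed\n", path);
        FAMClose(&fc);
        return 1;
    }

    /* Block on events until interrupted (Ctrl-C when the benchmark ends). */
    while (FAMNextEvent(&fc, &fe) > 0) {
        total++;
        if (fe.code == FAMChanged)
            changed++;
        if ((total % 1000) == 0)
            printf("%lu events (%lu FAMChanged), last file: %s\n",
                   total, changed, fe.filename);
    }

    FAMClose(&fc);
    return 0;
}

If the FAMChanged count tracks the number of client writes (or worse,
something closer to per-packet), that would point at the notification
path rather than the write path itself.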

Volker Lendecke wrote:
> Hi!
>
> smbd should not call famd at all for the write code path.
>
> It might be that due to an open notify (open explorer
> window on that directory?) famd must send many change
> notifies to smbd and then to the client, but the write code
> path itself does not touch famd at all.
>
> With best regards,
>
> Volker Lendecke
>
> On Thu, Feb 13, 2014 at 01:18:24PM -0800, Linda W wrote:
>   
>> I was running some benchmarks and trying to tune speed between
>> a samba 3.6.22 server and win7.
>>
>> My primary benchmark is using 'dd' on windows to read/write to
>> device files in my home directory to eliminate effects of disk latency.
>> So for reads, I transfer from h:/zero, and for writes I write to h:/null,
>> where h: is my unix home dir.  (For the other end of the transfer,
>> I use /dev/null and /dev/zero, respectively, under cygwin.)
>>
>> I don't remember seeing this during previous benchmarks, which is
>> why I was more than a little curious.
>>
>> When I'm doing the client writes
>> (writing 4G to the server), I used to see 98-99% cpu
>> usage from the smbd that was servicing my session.
>>
>> Now, I'm seeing about that amount from famd, and only in the upper
>> 80%'s for smbd.
>>
>> Transfer-wise, I'm losing 25% on writes, dropping them down to
>> around the same speed as, or slightly less than, 'reads' (writes
>> have normally been higher, because a writer with a large TCP
>> window can get ahead of where the reader is, but a reader
>> can never be ahead of what the writer has sent...).
>>
>> So why is famd being pegged?  My default write size is 8M,
>> count of 512.  Even if Samba called famd once per write, that'd
>> only be 512 times/second, which should be negligible, but
>> it's acting more like it is getting called with each packet?
>> Even that shouldn't be horrible, as I use a 9K packet size
>> to cut packet overhead by 5/6ths...
>>
>> So wondering if anyone else has seen such or had experience
>> with famd -- especially in the most recent 3.6.22 series?
>>
>> Thanks...
>>
>>
>>
>
>   


