fcntl spinlock in Linux?
J. Bruce Fields
bfields at fieldses.org
Thu Jan 31 06:12:10 MST 2013
On Thu, Jan 31, 2013 at 12:51:09PM +0100, Volker Lendecke wrote:
> On Wed, Jan 30, 2013 at 01:40:25PM -0500, J. Bruce Fields wrote:
> > Eh, I can't tear myself away: here's one thing to try if you've an easy
> > way to test it.
> >
> > The following removes the global lock lists and the code that depends on
> > them (deadlock detection and /proc/locks), which might break some
> > programs. Then it removes the global spinlock and replaces it by the
> > inode i_lock. Only lightly tested.
> >
> > Not acceptable for upstream, but it should tell us the most we could
> > gain by breaking up the global lock and fixing deadlock detection.
> >
> > Generated against something 3.8-rc2ish, but shouldn't be hard to apply
> > to older kernels.
>
> Thanks! I'm coordinating with my customer now on how to test
> this.
That would be great. But, again, to make sure it's clear: this is
intended for testing, not production.
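In case it helps with setting up a test: below is a minimal sketch of
the kind of userspace load that exercises the contended path -- many
processes doing short fcntl() lock/unlock cycles against one file. The
file name, process count, and iteration count are arbitrary
placeholders; with the per-inode split, running copies against
different files should spread the load, while a single shared file
stays the worst case.

    /* fcntl-stress.c -- rough sketch of a lock stress test, for
     * illustration only.  Forks NPROC children that each repeatedly
     * take and drop a write lock on the same byte of the same file via
     * fcntl(F_SETLKW), which is roughly the contended path discussed
     * here.  The file name and the counts are arbitrary.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NPROC 32
    #define ITERS 100000

    int main(void)
    {
        int fd = open("/tmp/lockfile", O_RDWR | O_CREAT, 0600);
        if (fd < 0) { perror("open"); return 1; }

        for (int i = 0; i < NPROC; i++) {
            if (fork() == 0) {
                struct flock fl = { .l_whence = SEEK_SET,
                                    .l_start = 0, .l_len = 1 };
                for (int j = 0; j < ITERS; j++) {
                    fl.l_type = F_WRLCK;
                    if (fcntl(fd, F_SETLKW, &fl) < 0) _exit(1);
                    fl.l_type = F_UNLCK;
                    if (fcntl(fd, F_SETLK, &fl) < 0) _exit(1);
                }
                _exit(0);
            }
        }
        for (int i = 0; i < NPROC; i++)
            wait(NULL);
        return 0;
    }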
> If I'm reading this right, it replaces the one central
> spinlock with per-inode spinlocks. This will help us scale
> further, because we can spread the load. But we will still
> have the problem that if we heavily contend on a single
> fcntl entry, we will see the thundering herd, right?
Right. This patch also strips out the deadlock detection, which looks
extremely inefficient and is performed every time someone goes to
sleep on a lock. Though of course it's hard to predict what effect
that has in practice.
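To make it concrete, here's the shape of that check (just an
illustration, not the actual fs/locks.c code): whenever a task is
about to block, the chain of "who is my blocker itself waiting on?"
gets followed, and each step is a linear scan of the global list of
blocked waiters, all while holding the one global lock:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct waiter {
        int owner;              /* lock owner (e.g. a process) */
        int blocked_on;         /* owner it is waiting for */
        struct waiter *next;    /* global list of blocked waiters */
    };

    /* Linear scan: find the entry, if any, where 'owner' is itself
     * blocked. */
    static struct waiter *find_block(struct waiter *blocked_list, int owner)
    {
        for (struct waiter *w = blocked_list; w; w = w->next)
            if (w->owner == owner)
                return w;
        return NULL;
    }

    /* Would "me waits on target" close a cycle back to me?  Every step
     * of the chain walk is another full scan of the blocked list. */
    static bool would_deadlock(struct waiter *blocked_list, int me, int target)
    {
        struct waiter *w;

        while ((w = find_block(blocked_list, target)) != NULL) {
            if (w->blocked_on == me)
                return true;
            target = w->blocked_on;
        }
        return false;
    }

    int main(void)
    {
        /* A already waits on B; now B asks to wait on A. */
        struct waiter a = { .owner = 1, .blocked_on = 2, .next = NULL };

        printf("deadlock: %d\n", would_deadlock(&a, 2, 1)); /* prints 1 */
        return 0;
    }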
> I'm not
> sure this is solvable at all except with some RPC service that
> queues things.
Well, there's more work to be done on the thundering herd too: currently
we wake up every waiter, and typically all but one of them just retry
the lock and go right back to sleep.
I have patches that fix that (so normally only one waiter will be
woken), but they're more complicated and haven't been looked at in a few
years. I can try to revive them.
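Roughly the difference, as a sketch (not the kernel code and not those
patches): today, releasing a contended lock wakes everything queued on
it and all but the winner block again; wake-one would just hand the
lock to the head of the queue and let it wake the next waiter when
it's done:

    #include <stdio.h>

    struct waiter {
        const char *name;
        struct waiter *next;    /* waiters queued on one lock */
    };

    static void wake(struct waiter *w)
    {
        /* Stand-in for waking the task.  In the herd case every woken
         * waiter retries the lock and all but one block again. */
        printf("waking %s\n", w->name);
    }

    /* Current behaviour: wake every waiter queued on the released lock. */
    static void unlock_wake_all(struct waiter *waitq)
    {
        for (struct waiter *w = waitq; w; w = w->next)
            wake(w);            /* N wakeups, N-1 futile retries */
    }

    /* Wake-one: wake only the head; it passes the lock on when done. */
    static void unlock_wake_one(struct waiter *waitq)
    {
        if (waitq)
            wake(waitq);
    }

    int main(void)
    {
        struct waiter c = { "C", NULL }, b = { "B", &c }, a = { "A", &b };

        unlock_wake_all(&a);    /* wakes A, B and C */
        unlock_wake_one(&a);    /* wakes only A */
        return 0;
    }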
--b.