[linux-cifs-client] [PATCH 0/4] cifs: alternate approach to fixing oplock queue races

Jeff Layton jlayton at redhat.com
Wed Sep 9 09:49:04 MDT 2009


On Tue, 8 Sep 2009 13:28:58 -0400
Jeff Layton <jlayton at redhat.com> wrote:

> On Tue, 8 Sep 2009 12:12:22 -0500
> Steve French <smfrench at gmail.com> wrote:
> 
> > On Tue, Sep 8, 2009 at 9:12 AM, Jeff Layton<jlayton at redhat.com> wrote:
> > > This patchset is an alternate approach to fixing the oplock queue
> > > problems. Rather than tracking oplocks with separate structures, this
> > > patchset adds some fields to cifsFileInfo.
> > >
> > > When an oplock break comes in, is_valid_oplock_break takes an extra
> > > reference to the cifsFileInfo and queues the oplock break job to the
> > > events workqueue.
> > 
> > This looks excellent - the 4th patch needs to be tested carefully,
> > but it's easier to read than the alternatives. Any idea when the
> > schedule_work job would get dispatched (is the current task put to
> > sleep immediately?)
> >
> 
> When it actually runs depends on what other jobs are running in the
> queue, how many CPUs are in the box, etc. In practice, I don't think
> this set will materially change when the oplock break actually
> happens, unless there is a lot of stuff sitting in the events queue
> beforehand.
> 
> It may also help performance if a lot of oplock breaks come in at
> once, since they won't be serialized. You could have one running on
> each CPU.
> 
> Another thing we could consider is using the new slow_work stuff that
> dhowells added recently. An oplock break task could take a while to
> run if there is a lot of data to be flushed, so that may be a better
> scheme (though it does add a build-time kconfig dependency to CIFS).
> 
> LWN has a good writeup on the slow_work infrastructure:
> 
> http://lwn.net/Articles/329464/
> 
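For reference, the workqueue scheme described above looks roughly like
this. It's only a sketch: the field name oplock_break and the helpers
cifsFileInfo_get()/cifsFileInfo_put() are illustrative assumptions, not
necessarily what the patchset actually uses.

```c
/* Sketch only: names below are illustrative, not the patchset's. */
struct cifsFileInfo {
	/* ... existing fields ... */
	struct work_struct oplock_break;	/* deferred oplock break job */
};

static void cifs_oplock_break(struct work_struct *work)
{
	struct cifsFileInfo *cfile = container_of(work, struct cifsFileInfo,
						  oplock_break);

	/* flush dirty data, send the oplock break response on the wire,
	 * then drop the reference taken when the job was queued */
	cifsFileInfo_put(cfile);		/* hypothetical put helper */
}

/* in is_valid_oplock_break(), once the break is matched to an open file: */
	cifsFileInfo_get(cfile);		/* pin the file until the job runs */
	INIT_WORK(&cfile->oplock_break, cifs_oplock_break);
	schedule_work(&cfile->oplock_break);	/* queue to the shared events workqueue */
```

schedule_work() itself doesn't sleep; the job runs later on whichever
events-workqueue thread picks it up, which is why dispatch timing
depends on what else is sitting in the queue.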

Here's a quick conversion to slow_work that I did this morning. This
patch applies on top of the set I sent yesterday. Tested with
connectathon and it seems to work correctly (sniffing traffic showed
the oplock break response going out onto the wire).
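The conversion follows the usual slow_work shape: a slow_work item
embedded in cifsFileInfo, plus a slow_work_ops table whose
get_ref/put_ref callbacks pin the file while the break is pending. A
minimal sketch follows; the function names and ref helpers here are
assumptions for illustration, not necessarily what the patch uses.

```c
#include <linux/slow-work.h>

/* Sketch only: illustrative names, not necessarily the patch's. */
static int cifs_oplock_break_get(struct slow_work *work)
{
	struct cifsFileInfo *cfile = container_of(work, struct cifsFileInfo,
						  oplock_break);
	cifsFileInfo_get(cfile);		/* hypothetical ref helper */
	return 0;
}

static void cifs_oplock_break_put(struct slow_work *work)
{
	struct cifsFileInfo *cfile = container_of(work, struct cifsFileInfo,
						  oplock_break);
	cifsFileInfo_put(cfile);		/* hypothetical ref helper */
}

static void cifs_oplock_break(struct slow_work *work)
{
	/* flush dirty data and send the oplock break response;
	 * this can block for a while, which is what slow_work is for */
}

static const struct slow_work_ops cifs_oplock_break_ops = {
	.get_ref	= cifs_oplock_break_get,
	.put_ref	= cifs_oplock_break_put,
	.execute	= cifs_oplock_break,
};

/* module init must call slow_work_register_user() (this is where the
 * build-time kconfig dependency comes from); then, to queue a break: */
	slow_work_init(&cfile->oplock_break, &cifs_oplock_break_ops);
	slow_work_enqueue(&cfile->oplock_break);
```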

I like this better than queueing to the events queue -- seems less
likely to block more critical tasks.

Thoughts?
-- 
Jeff Layton <jlayton at redhat.com>

