[PATCH]: Windows BRL Try 2

Jeremy Allison jra at samba.org
Wed Feb 18 18:01:18 MST 2009


On Thu, Feb 19, 2009 at 01:57:40AM +0100, Volker Lendecke wrote:
> On Wed, Feb 18, 2009 at 03:58:57PM -0800, Jeremy Allison wrote:
> > The logic should be identical if we just move
> > the same rescheduling/spinning code under the
> > default VFS (POSIX) implementation for Windows locks. That
> > way a filesystem that actually does the real Windows
> > lock semantics just gets the raw data it needs
> > to do "the right thing" (ie. if your filesystem
> > needs to do async then you'll have to do the same
> > timing semantics or copy the default code into
> > your VFS implementation). The upper layer async
> > queue logic only gets triggered if the VFS returns
> > NT_STATUS_MORE_PROCESSING_REQUIRED.
> 
> Have you looked at the code I've added for external named
> pipes? Here I have a np_read_send/recv pair which is always
> used. For the internal pipes I've added an immediate trigger
> to the event loop. This way we keep the code simple: The
> upper layers are *always* async, even if we get the lock
> immediately. I haven't really benchmarked that yet (named
> pipes by definition are not performance-sensitive), but my
> rough guess is that the additional malloc's don't really
> hurt too badly. And if they do, we need to make the
> immediate trigger and the async req setup faster by possibly
> using some talloc_pool tricks again.
> 
> But for the requests that *might* be async, I would really
> like to make the normal code paths cope with an async call
> in the VFS.

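For reference, the "always async, even when we can complete
immediately" pattern you describe comes out roughly like this in
the tevent_req style (a minimal sketch only -- the helper names
like brl_lock_send/brl_try_lock_now are made up, and this is not
the actual np_read_send code):

struct brl_lock_state {
	NTSTATUS status;
};

static struct tevent_req *brl_lock_send(TALLOC_CTX *mem_ctx,
					struct tevent_context *ev,
					struct files_struct *fsp)
{
	struct tevent_req *req;
	struct brl_lock_state *state;

	req = tevent_req_create(mem_ctx, &state, struct brl_lock_state);
	if (req == NULL) {
		return NULL;
	}

	/* Hypothetical helper: try to grab the lock right away. */
	state->status = brl_try_lock_now(fsp);

	if (NT_STATUS_IS_OK(state->status)) {
		/*
		 * Got it immediately: still complete via the event
		 * loop, so callers only ever see the async path.
		 */
		tevent_req_done(req);
		return tevent_req_post(req, ev);
	}

	/* Otherwise register a pending-lock retry and return. */
	return req;
}

static NTSTATUS brl_lock_recv(struct tevent_req *req)
{
	NTSTATUS status;

	if (tevent_req_is_nterror(req, &status)) {
		return status;
	}
	return NT_STATUS_OK;
}
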
So where are you thinking of putting the async logic?
For lockingX, it would currently be easier to do as I
suggest: make the SMB_VFS_BRL_LOCK_WINDOWS call the async
point, and use logic similar to the deferred open code.
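
Concretely, I'm picturing something like the sketch below at the
lockingX reply level; the helper name and the exact vfs op
signature are placeholders to show the flow, not the real patch:

	status = SMB_VFS_BRL_LOCK_WINDOWS(br_lck, &lock, blocking_lock);

	if (NT_STATUS_EQUAL(status, NT_STATUS_MORE_PROCESSING_REQUIRED)) {
		/*
		 * The default (POSIX) backend couldn't grant the
		 * lock right now. Queue the SMB request and retry
		 * later, the same way the deferred open code
		 * re-queues an open that hits a share mode conflict.
		 */
		defer_blocking_lock_request(req, &lock); /* hypothetical */
		return;
	}

	if (!NT_STATUS_IS_OK(status)) {
		reply_nterror(req, status);
		return;
	}

	/* Lock granted immediately -- reply as normal. */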

Or do you want to make this a test case for more generic
async changes?

There are easier calls to do this for first than the
locking paths (IMHO :-).

Jeremy.

