[SCM] Samba Shared Repository - branch master updated

Volker Lendecke Volker.Lendecke at SerNet.DE
Thu Dec 18 06:26:46 MST 2014


Hello, Poornima!

Thanks for your replies!

On Thu, Dec 18, 2014 at 07:34:40AM -0500, Poornima Gurusiddaiah wrote:
> > There is no other way to get this done? Glusterfs is a pure networked file
> > system with a library, right? Can't we implement the protocol
> > for read/write ourselves using sockets and tevent_fds? Forcing threads
> > upon the users of the library just to do asynchronous I/O is some pretty
> > heavy burden on the library user.
> 
> We can implement the protocol for read/write in vfs_plugin itself using tevent fds
> and the pthreadpool you have mentioned.

Hmm. There must be a misunderstanding here. I guess that
implementing the whole glusterfs stack based on Samba's
tevent system is a major effort that is not really realistic
here. If we did that we could completely avoid threads and
still be asynchronous. But as I said, this would be a
solution of last resort if nothing else helps.

My question is whether glfs_pread_async is an
API that Samba should make use of or whether there are
alternatives. To be honest I don't like the fact that
threads leak through callbacks. I'd much rather not call
glfs_pread_async but issue multiple simultaneous glfs_pread
calls using our pthreadpool. This way the gluster lib does
not have to do any threading magic itself: it can wait for
the replies synchronously, since enough threads are available.

The much further reaching question is: Are we in the
position to ask for a different API to do async calls? There
are examples out there how to hook an event loop into a
library, Samba itself uses for example the avahi interface
in source3/lib/avahi.c. Modeled after that we've implemented
a similar interface, take a look at source3/lib/poll_funcs/.
I would imagine that gluster as a network library could take
a similar approach and be event-loop agnostic via such an
abstraction.

> > Is it really the case that the result of an async call pops up in another
> > library-created thread? That's at least unusual I'd say :-)
> 
> Yes :) the result of the async call pops up in another thread, but in
> Samba the callback should run in the same thread, else it will lead to
> corruption. That is why all the pthread and eventfd code.

Ok, understood. This is however completely non-obvious, since
documentation of glfs_pread_async simply does not exist
beyond the function prototype.

> > If the library is thread-safe, I'd rather go and wrap the normal pread
> > calls in a pthreadpool. Sync calls should not trigger new threads
> > to appear. This way we have control over the threads and can use
> > pthreadpool. Would that be a way to do it?
> 
> Yes, pthreadpool can be used to get the async behavior. Ira tried replacing
> eventfd with a pipe and doing away with the pthread calls; if this doesn't
> work we might have to explore using the pthreadpool.

Ok, great. Waiting for Ira to send his patch.

> > > I can see ways to eliminate the locking and eventfd code, using pipes, but
> > > beyond that, I don't see much we can do here.  The issue there will be
> > > making sure we maintain the performance of the current implementation.
> > 
> > pthreadpool might not have the best performance for queueing, but at least
> > it does not malloc per job. It uses condition variables and mutexes,
> > the standard technique here. I've tried to get something lock-free to
> > queue jobs, but this seems pretty difficult to do. I'm willing to steal
> > ideas from libgfapi here :-)
> 
> libgfapi doesn't use a lock-free mechanism to queue jobs. Glusterfs, when
> the top layer (libgfapi/FUSE) is removed, is asynchronous by nature: the
> Glusterfs client queues the rpc requests to the server and the server
> wakes the poll when the rpc is done. libgfapi/FUSE uses mutex/cond
> variables to get the synchronous behavior. Hope this answers; if not I
> can elaborate more on this.

Ok, this sounds as if pthreadpool/glfs_pread and glfs_pread_async should
be in the same performance range.

> Will work on doing away with the pthread calls in the vfs module.

Thanks!

Volker

-- 
SerNet GmbH, Bahnhofsallee 1b, 37081 Göttingen
phone: +49-551-370000-0, fax: +49-551-370000-9
AG Göttingen, HRB 2816, GF: Dr. Johannes Loxen
http://www.sernet.de, mailto:kontakt at sernet.de

