[RFC] [WIP] tevent/glib integration

Ralph Boehme rb at sernet.de
Tue Jan 26 11:44:44 UTC 2016

Hi Noel,

On Tue, Jan 26, 2016 at 10:34:51AM +0000, Noel Power wrote:
> On 26/01/16 08:05, Ralph Boehme wrote:
> > On Mon, Jan 25, 2016 at 07:07:41PM +0000, Noel Power wrote:
> [...]
> >> But doesn't it still suffer from the same problem of being throttled by
> >> tevent's one-event-processed-per-loop-iteration policy?
> >> e.g. it depends on either a timer or one of the glib fds (with
> >> available events) being monitored to fire, but any number of ready
> >> non-glib-related timers or fds could be queued for processing before it, and
> >> in the s3 case a tevent_fd that fires is put to the back of the list
> >> after firing :/.
> > timer, immediate and signal events are rare compared to fd events, so
> > I wouldn't worry about those.
> >
> > fd event sources are maintained in a list in tevent. When checking
> > pending fd events the list is walked from the start and the first
> > pending event source is processed and then migrated to the end of the
> > list. This prevents us from a single fd starving other event sources.
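> > As a self-contained illustration of that rotation (toy types and
> > names, not tevent's actual structures), the walk-and-migrate step
> > could be sketched like this:
> >
> > ```c
> > #include <assert.h>
> > #include <stdio.h>
> >
> > /* Toy stand-in for tevent's fd event source list. */
> > struct fde {
> >     int fd;
> >     int pending;       /* in tevent this would come from poll()/epoll */
> >     struct fde *next;
> > };
> >
> > /* Walk from the head, handle the first pending event source, then
> >  * move it to the tail so one busy fd cannot starve the others. */
> > static struct fde *run_one(struct fde *head, int *handled_fd)
> > {
> >     struct fde **pp, *fde;
> >
> >     for (pp = &head; (fde = *pp) != NULL; pp = &fde->next) {
> >         if (!fde->pending) {
> >             continue;
> >         }
> >         *handled_fd = fde->fd;   /* "process" the event */
> >         fde->pending = 0;
> >
> >         *pp = fde->next;         /* unlink ... */
> >         fde->next = NULL;
> >         if (head == NULL) {
> >             return fde;          /* list held only this element */
> >         }
> >         struct fde *tail = head;
> >         while (tail->next != NULL) {
> >             tail = tail->next;
> >         }
> >         tail->next = fde;        /* ... and append at the tail */
> >         return head;
> >     }
> >     *handled_fd = -1;            /* nothing pending */
> >     return head;
> > }
> >
> > int main(void)
> > {
> >     struct fde c = { 7, 1, NULL };
> >     struct fde b = { 5, 1, &c };
> >     struct fde a = { 3, 1, &b };
> >     struct fde *head = &a;
> >     int fd;
> >
> >     head = run_one(head, &fd);   /* handles fd 3, rotates it back */
> >     assert(fd == 3);
> >     assert(head == &b && head->next == &c && c.next == &a);
> >
> >     head = run_one(head, &fd);   /* next pass picks fd 5, not 3 again */
> >     assert(fd == 5);
> >     printf("round-robin ok\n");
> >     return 0;
> > }
> > ```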
> I understood that, but it is for precisely this reason that I think in a
> busy server with many clients hitting it, the glib fd (or the epoll
> fd) could be shifted as you describe and not get fired for many
> iterations.

Yes, but this would apply equally to a glib context if all client fds
were attached to it.

> > *and* we keep polling the g_main_context until the returned timeout is
> > greater than 0 and g_main_context_check() returns false.
> like I mentioned, that worries me; it's not what glib does itself (IIRC
> it wasn't what g_main_context_iteration did in whatever glib source
> version I looked at a couple of weeks ago)

please read the comment at:

Hope that explains it.

> and it is not the way the documentation
> seems to suggest the steps for integrating an external loop should
> work. I would be very wary of depending on using the glib API in such
> a way; I would expect such event-loop processing code to be very
> sensitive to how it is used (using it in an unexpected way might work
> now, might not work later, or may introduce some bad behaviour in some
> scenarios), so either way there is a risk of the rug being pulled.
> It's even possible that you might favour glib unfairly against tevent
> by looping many times per outer tevent loop. I guess I should step
> back and stop harping on now; I feel that I am just continually
> putting forward negative opinions, and that isn't my intention at all,
> and it makes me feel bad that this is the extent of my contribution

Don't worry, this discussion is necessary and I appreciate your
input.
> >> So I still fear it could quite easily take quite a few
> >> tevent loop iterations (and that situation worsening as more and more
> >> normal events and timers are added by a samba process) before available
> >> glib events get processed and the glib context is released; granted,
> >> when it does get processed it processes *all* glib events, which is a
> >> significant improvement. Presumably this would be the same with the
> >> extra epoll source tacked on. I don't know :/ in a busy smbd process,
> >> for example, glib won't AFAICS get a fair slice of the action.
> > I'm quite confident it will for the reasons described above, and I'm
> > willing to test-drive it with the Spotlight RPC server. :)
> is that a single client per rpc instance model ?

By default yes.

> if so, yeah, it might work
> well; my own use case at the moment is a single daemon
> where there could be many simultaneous clients [...]

I have this use case as well in mdssd (the external mdssvc RPC server;
this is optional, by default the mdssvc service runs inside smbd),
but there we use a configurable number of worker processes, not a
process per client.

I think we have to fix your implementation. :)

> do I get you correctly that you intend to change the
>     do {
>         gok = g_main_context_check(...)
>     } while(true);
> It's just that with the updated branch I don't see such a change.


Hopefully this explains it.


SerNet GmbH, Bahnhofsallee 1b, 37081 Göttingen
phone: +49-551-370000-0, fax: +49-551-370000-9
AG Göttingen, HRB 2816, GF: Dr. Johannes Loxen
