[PATCH] tevent and threads - infrastructure improvements - version #2

Jeremy Allison jra at samba.org
Mon Jul 27 15:54:24 UTC 2015


On Mon, Jul 27, 2015 at 05:02:43AM -0400, Simo Sorce wrote:
> 
> I like this patchset, the general flow of code looks and feels good (I'm
> using metze's lingo here :-).
> 
> I see no real issues with the architecture of the code after you applied
> metze's requested changes.
> 
> However I wonder if we shouldn't make it easier to have a return
> callback automatically called, instead of requesting the caller to
> schedule back a return function.
>
> This can be easily done later adding more APIs, but I think it would be
> beneficial to do it straight away to avoid mixed style in the code base
> later on.

Don't think that's needed. Really. If you look at the
sample code it's really easy to do (the second test
and the tutorial show how to do it) - you don't even
need to add a new API for it. Some communication really
is one-way.
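
For reference, here's a sketch (from memory, with
illustrative names - not verbatim from the patch) of the
reply pattern the second test uses. The reply is just a
second proxy schedule in the opposite direction:

#include <talloc.h>
#include <tevent.h>

/* Hypothetical request state. 'finished' lives on the child's
 * stack and is only ever written by the child thread. */
struct reply_state {
        struct tevent_thread_proxy *child_proxy;
        struct tevent_immediate *im;
        int *finished_ptr;
};

/* Runs on the *child* thread, via its own event context. */
static void child_reply_callback(struct tevent_context *ev,
                                 struct tevent_immediate *im,
                                 void *private_ptr)
{
        struct reply_state *state = talloc_get_type_abort(
                private_ptr, struct reply_state);

        *state->finished_ptr = 1;
}

/* Runs on the *master* thread: consume the request, then
 * schedule the reply back on the child's event context. */
static void master_callback(struct tevent_context *ev,
                            struct tevent_immediate *im,
                            void *private_ptr)
{
        struct reply_state *state = talloc_get_type_abort(
                private_ptr, struct reply_state);

        /* ... act on the request here ... */

        state->im = tevent_create_immediate(state);
        tevent_thread_proxy_schedule(state->child_proxy,
                                     &state->im,
                                     child_reply_callback,
                                     &state);
}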

> The natural way people will want to use this is that they create a
> thread pool at program initialization and then they just fire threaded
> calls and expect a request handler to be fired back when the thread is
> done. Doing this manually will be quite burdensome and repetitive so I
> would expect to use a higher level API that goes something like:
> 
> void tevent_thread_run(tevent_thread_pool *pool, tevent_context *ev,
> 		       tevent_thread_handler *in_thread_handler,
> 		       tevent_thread_handler *return_callback,
> 		       void *private_data);
> 
> The pool could be folded into ev->thread_pool and we can do automatic
> allocation tied to a specific event handler, probably better to keep it
> private and per ev handler. 
> 
> We can add this later though if you think it is better to get your
> foundation work in first.

Yeah, this isn't part of tevent, I think.
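
A convenience wrapper like that can live entirely in the
application. A rough sketch under that assumption (all names
hypothetical; error handling and real thread pooling elided):

#include <pthread.h>
#include <talloc.h>
#include <tevent.h>

/* Hypothetical helper, NOT part of tevent: run work_fn on a
 * new thread, then fire return_cb back on the main event
 * context via its proxy. */
struct app_job {
        struct tevent_thread_proxy *main_proxy;
        struct tevent_immediate *im;
        void (*work_fn)(void *);
        tevent_immediate_handler_t return_cb;
        void *job_data;
};

static void *app_job_thread(void *arg)
{
        struct app_job *job = arg;

        job->work_fn(job->job_data);

        /* Hand ourselves back; return_cb runs on the main
         * thread and receives the struct app_job. */
        job->im = tevent_create_immediate(job);
        tevent_thread_proxy_schedule(job->main_proxy, &job->im,
                                     job->return_cb, &job);
        return NULL;
}

static int app_job_run(struct tevent_thread_proxy *main_proxy,
                       void (*work_fn)(void *),
                       tevent_immediate_handler_t return_cb,
                       void *job_data)
{
        pthread_t t;
        struct app_job *job = talloc_zero(NULL, struct app_job);

        if (job == NULL) {
                return -1;
        }
        job->main_proxy = main_proxy;
        job->work_fn = work_fn;
        job->return_cb = return_cb;
        job->job_data = job_data;

        /* A real version would pool and detach/join threads. */
        return pthread_create(&t, NULL, app_job_thread, job);
}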

> On the patch itself I have 3 nitpicks.
> 1. can we add a DLINK_FOREACH() macro ?

That's a separate patch to dlinklist.h :-).
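
It would only be a couple of lines there - something like
this (a sketch, using the DLIST_ prefix the existing macros
in dlinklist.h use):

/* Hypothetical addition to dlinklist.h: walk a list via the
 * embedded next pointers, like the existing DLIST_* macros. */
#define DLIST_FOREACH(p, list) \
        for ((p) = (list); (p) != NULL; (p) = (p)->next)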

> 2. in the pipe_read_handler you have
> +       while (len == 64) {
> +               char buf[64];
> +               len = read(tp->read_fd, buf, 64);
> +       };
> 
> Shouldn't this be:
> while (len > 0) {
> 	char buf[64];
> 	size_t size = read(tp->read_fd, buf, 64);
> 	if (size == -1) break;
> 	len -= size;
> }

That seems a bit complex for just flushing
out the pipe, which is all I'm doing here.

> You could have short reads, but perhaps it doesn't matter anyway ...

It doesn't matter - I'm just flushing the
pipe, and scheduling all pending requests
afterwards.
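
For reference, the whole flush amounts to something like
this (a sketch; the pipe is non-blocking and only used as a
wakeup, so the bytes themselves are thrown away):

#include <unistd.h>

/* Keep reading until a short read (or error/EAGAIN) says the
 * wakeup pipe is empty. The contents don't matter; the
 * buffer size just has to match the loop test so a short
 * read terminates it. */
static void drain_wakeup_pipe(int fd)
{
        ssize_t len;

        do {
                char buf[64];
                len = read(fd, buf, sizeof(buf));
        } while (len == 64);
}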

> Also, why 64 ?

I had to pick a number - 64 seemed good :-).

> 3. In the tests you assign 'finished' and read it across threads, this
> works only if assignments/reads are atomic, but this is not guaranteed
> in your code,

No, I don't. 'finished' only gets set from the thread that
owns it. The only two functions that write to 'finished' are
thread_callback() (note the /* Called in child-thread context */
comment above it) and thread_timeout_fn(), which is also
scheduled on the child thread's event context.

That's the nice thing about this pattern - the callback
is run on the *target* thread. So when the master replies
and schedules an event on the child, the 'finished' int
gets set when the child thread runs the tevent_immediate
function, not in the master context.

When I said this code passes valgrind --tool=drd, I
*meant* it :-).
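
Schematically, the child side looks like this (a sketch,
not the literal test code):

#include <tevent.h>

/* Child thread: 'finished' lives on this stack frame and is
 * only written by immediates run from this thread's own
 * event loop (thread_callback/thread_timeout_fn above). */
static void *test_thread(void *private_ptr)
{
        struct tevent_context *ev = private_ptr; /* child-owned */
        int finished = 0;

        /* ... create a proxy for this context and send the
         * request to the master, handing over &finished so
         * the reply handler can set it ... */

        while (finished == 0) {
                tevent_loop_once(ev);
        }

        /* Nothing can write 'finished' once this frame is gone. */
        return NULL;
}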

> if we are unlucky it may cause flaky test failures on
> some architectures. Also if I read it right 'finished' is on the stack
> and you have a timeout that can make the thread terminate before the
> master writes to it, this could cause the test to write on freed/reused
> stack or segfault or similar causing also undebuggable testing failures.

See above - the master never writes to it; only the child
thread (which owns it for its whole lifetime) does.

Jeremy.


