[PATCH] Use libpcl for portable coroutines
sam at liddicott.com
Fri May 15 09:35:42 GMT 2009
* Stefan (metze) Metzmacher wrote, On 14/05/09 21:19:
> Hi Sam,
>> This is a proof of concept patch, to make using libpcl easy for smb
>> clients, so that sync functions can easily be converted to async functions.
> I added this based on tevent, see the top 4 patches in:
> I haven't posted this before, because I want to do more testing...
> Also we need to discuss if we really want them, and guide lines for
> which code parts should use them.
I notice that Volker has added async_req stuff too.
Looking at your code, I have some thoughts and comments:
I see a fixed stack of 4196 - I found even 8K to be too small for what
I was doing. I think we ought to allow a size to be specified, and
ought to (as you are) allocate our own stack - but not with talloc or
malloc; rather with mmap, so that RAM is only committed as required.
That should help keep co-routines cheap on platforms that support
this.
It took me a while to grasp the example at:
but it seems that the main principle is that you can nest co-routines,
provided each inner one is given a completion callback by its creator
that switches back to its creator's stack.
The risk seems to lie in ensuring that the creator's stack isn't
activated before the thing it waits on is complete - or that it can
tell when this occurs and switch to co_main to await the official
callback.
I think co_resume() isn't generally safe to call - co_resume() doesn't
necessarily switch to the creator coroutine, which is what would be
useful. If a co-routine was launched from another co-routine, the two
co-routines may keep flipping between each other, which is why I
switched to co_call(co_main) - co_main in our case being the stack
that the event loop runs in.
I gnash my teeth that the real co_main is static - so I had to
"discover" with co_current() when the first co_create() is called by
I think it's a shame that the active tevent_coroutine needs to be
passed to the yield function, but I see that we must, unless we extend
libpcl to allow user data to be attached to coroutine_t. Maybe we
could nick the first few bytes of the stack to point to the
tevent_coroutine? But if we may be linking against other libraries
that also use libpcl, then perhaps we must do as you have done...
I also prefer (as you saw in my patch) that tevent_coroutine_yield
should be called implicitly by example_ocor_recv. One consequence is
that code can be written synchronously without much thought for
co-routines, yet work asynchronously as co-routines when support is
available. Also, sometimes the _send and _recv functions may be hidden
behind an API, and the caller might not have anything on which to
yield.
I also note the continuation pattern that runs through
ntvfs_request->async_states->send_fn, smbcli_request->async.fn,
composite_context->async.fn and now tevent_req->async.fn. I've heard
it regretted that the composite and smbcli_request structs aren't more
similar. If these were all reworked with the async struct at the top -
perhaps being a union with it - then co-routines could be adopted for
all these types of continuation without being tied to tevent or to any
one of them.
In the example I think the *_send() functions are misnamed because,
unlike the old async pattern, they don't send anything; they are just
a trampoline that launches the co-routine.
I dearly want to get rid of the *_send() functions in your example -
the fact that they exist at all is because there is no portable way to
replicate the current stack frame onto the new stack; otherwise
co_create could work like fork(). As this isn't possible, I think the
trampoline should be minimized as much as possible, as it is a
distraction from the intent of the code.
I tried to keep the trampoline to nothing beyond the necessary:

  void proxy_connect_trampoline(va_list args)
  {
          co_get_va(struct ntvfs_module_context *, ntvfs);
          co_get_va(struct ntvfs_request *, req);
          co_get_va(union smb_tcon *, tcon);
          ntvfs_co_passthrough_exit(proxy_connect(ntvfs, req, tcon));
  }

and yet the trampoline here (by means of the passthrough macro) also
deals with co_exit and other cleanup. The body function can then be
just a regular synchronous function.
Perhaps we have different needs in this work.
It's good that tevent integrates nicely with co-routines, but I think
that co-routines here are integrating too tightly with tevent and
becoming too dependent on it. I think that private_data and a co_call
routine should be the key interface for activating a co-routine, not
tevent; partly because this IS the interface that already exists with
composites, ntvfs_requests and smbcli_requests.
I think that co_main is fundamental to co-routine use here. A function
may call another function asynchronously by launching the co-routine
before calling, or it may permit the other function to do this
internally. Whichever function launches the co-routine must also set the
handler to switch back on completion, but the special case arises when
the co-routine is launched from the main stack.
If the main stack launches a co-routine, then it must return signifying
that a response will follow later.
If one co-routine launches another, then the launching co-routine is
suspended until the other returns, and the invoking caller from the main
stack already expects an asynchronous response.
Are there cases where one co-routine will launch another and expect to
be returned to before the other has completed? There may be, if it
wants to parallelize requests. It will have to launch all the
co-routines and then enter a loop that keeps calling co_call(co_main)
until all the requests are complete. It does not need to use tevent
directly, as long as the requests it is waiting on use tevent.
So these are my thoughts:
* that it should be simple to launch a new coroutine without using tevent
* that one day composites, smbcli_requests, ntvfs_requests and
tevent_req will share a common async union
* that trampolines be kept as tiny as C allows, and allow the body to be
written as a regular sync function
* that _recv functions do the yielding so the caller doesn't have to
(saves re-writing lots of code)