FW: [PATCH] Stop packet_recv greediness, stop using event_add_timed

Sam Liddicott sam at liddicott.com
Tue May 19 06:43:35 GMT 2009


Volker Lendecke wrote:
> On Tue, May 19, 2009 at 07:06:06AM +0100, Sam Liddicott wrote:
>   
>> The ordering isn't purely dependent on eventlib.
>>
>> Currently the ordering is
>> 1. Expire timers (and 0 timers)
>> 2. Socket fd, client decides if read or write takes priority.
>> 3. Timers that timed out while waiting for (2) are picked
>> up as (1) next time around.
>>
>> It's not even really a timed event, it just uses timed
>> events to implement. In the ordering it needs to be
>> implemented at the same priority as socket reads - which
>> for server sockets is after socket writes, and for client
>> sockets is before socket writes. So packet_recv or tevent
>> can't know how to prioritize it, only the socket owner
>> knows.
>>
>> Consider that I didn't address the DOS scenario Volker
>> raised on IRC, so perhaps if instead of trying to leave
>> one byte, what if I try to read only the correct number of
>> bytes? This means reading a very few bytes until the
>> socket owner can work out how many should be read.
>>
>> Of course such a patch would be almost identical, but this
>> time it would follow a less obnoxious strategy and also
>> avoid DOS.
>>
>> What do you think?
>>     
>
> For this as said on irc I'd like to see a speed comparison.
>   
/me bangs head on floor.

I know, but since my patches seem to get rejected for reasons other than 
failing to do what they claim, I thought I'd rather tackle the rejection 
problem first.

I don't really want to spend time gathering data only to find that I got 
the wrong data, and then, once I have the right data, to find it's the 
wrong patch.

We'd already considered A. Bartlett's suggestions, and why they don't 
work. We'd also considered adding extra fields to libevent so that the 
consumer can signal that it didn't consume all of the data even though 
it may have read all of it - so that libevent could fake a select on 
the fd. That would be better than using a timer to fake it, but it 
would also be very intrusive and complicated.
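For concreteness, here is a rough sketch of the exact-byte-count strategy I 
floated above: read only the fixed-size length header, then exactly the 
payload length, so the receiver never consumes bytes belonging to the next 
packet. The names and the simple 3-byte big-endian length field are 
illustrative only - this is not the actual patch, and real SMB/NBT framing 
differs in the details.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* Read exactly "count" bytes, looping over short reads.
 * Returns 0 on success, -1 on EOF or error. */
static int read_exact(int fd, void *buf, size_t count)
{
    uint8_t *p = buf;
    while (count > 0) {
        ssize_t n = read(fd, p, count);
        if (n <= 0) return -1;
        p += n;
        count -= (size_t)n;
    }
    return 0;
}

/* Hypothetical sketch: read the 4-byte header first, decode the
 * payload length from its last three bytes (big-endian), then
 * read exactly that many payload bytes - and not one byte more.
 * Returns the payload length, or -1 on error. */
static int recv_one_packet(int fd, uint8_t *payload, size_t max)
{
    uint8_t hdr[4];
    if (read_exact(fd, hdr, 4) < 0) return -1;
    size_t len = ((size_t)hdr[1] << 16) |
                 ((size_t)hdr[2] << 8) | (size_t)hdr[3];
    if (len > max) return -1;
    if (read_exact(fd, payload, len) < 0) return -1;
    return (int)len;
}
```

The point is that bytes after the framed payload stay in the kernel buffer, 
so the fd remains readable and the event loop - not a fake timer - decides 
when the next packet is handled.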

So, given that we've spent a lot of time thinking about it over here, 
what I'm trying to ask about this patch is: /if it doesn't spoil 
performance/, is it the sort of patch that won't get rejected?

As for the speed comparison, do you want a comparison of a regular 
server process doing certain tasks, or do you want the timings for a pure 
read loop reading a few megabytes of Samba packets and doing no other 
processing?

From my point of view I don't care if the performance is worse; the 
fact that server responses are queued up until the server request queue 
is empty has a worse effect on performance and memory use.

BTW: we also considered that the queueing of responses could check the 
socket for writability and immediately write as many queued packets as 
it can. It hurts that packets can't hit the wire until the event loop is 
regained.

Ultimately we suspect that libevent needs reworking so that it can 
consider the state of all sockets and timers and then deliver them in a 
global priority order, instead of taking each fd in turn.
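To make that suspicion concrete, a global-priority dispatcher might gather 
every ready event - timers and fds alike - and sort by a caller-assigned 
priority before running any handler. This is only a thought experiment on 
how such a rework could look, not tevent's API; all names here are 
invented.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical "ready event" record collected in one poll pass. */
struct ready_ev {
    int priority;               /* lower value runs first */
    void (*handler)(void *priv);
    void *priv;
};

static int by_priority(const void *a, const void *b)
{
    const struct ready_ev *x = a, *y = b;
    return x->priority - y->priority;
}

/* Dispatch all ready events in global priority order, rather than
 * handling each fd as the poll loop happens to reach it. */
static void dispatch_all(struct ready_ev *evs, size_t n)
{
    qsort(evs, n, sizeof(evs[0]), by_priority);
    for (size_t i = 0; i < n; i++) {
        evs[i].handler(evs[i].priv);
    }
}

/* Tiny demo handler: records the order handlers were invoked in. */
static int invoked[8];
static size_t invoked_n;
static void record(void *priv)
{
    invoked[invoked_n++] = *(int *)priv;
}
```

With this shape the socket owner could, for example, rank client-socket 
reads above writes and server-socket writes above reads, which is exactly 
the per-owner ordering that neither packet_recv nor tevent can decide on 
its own today.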

Sam
