The need for a special SMB receive system call

Richard Sharpe rsharpe at
Wed Sep 25 16:06:00 GMT 2002

On Wed, 25 Sep 2002, Andrew Bartlett wrote:

> Richard Sharpe wrote:
> > 
> > Hi,
> > 
> > In the course of hacking Samba to use the recv variant of sendfile I have
> > had to modify the low level routines that read in an SMB off of a socket.
> > 
> > Currently, Samba makes two system calls to receive an SMB, one to receive
> > the length and the other to receive the rest of the SMB[1].
> > 
> > Because I wanted to leave write data on the socket for Write&X calls, and
> > have this data transferred directly to the open file in the case where it
> > can be safely done (i.e., we have an oplock on the file, etc.), I had to split
> > that into a minimum of three reads: one to read the length, another to
> > read to the end of the fixed header of the SMB so we can check the command
> > type, and a last one to read out the rest of the data if it is not a
> > Write&X request.
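The split described above hangs off the 4-byte NetBIOS session header that precedes every SMB. A minimal sketch of the two pieces involved (the length decoding mirrors Samba's smb_len macro; read_full is a hypothetical helper, with EINTR handling and timeouts trimmed):

```c
#include <assert.h>
#include <stddef.h>
#include <unistd.h>

/* Decode the SMB length from the 4-byte NetBIOS session header:
 * byte 0 is the message type, the low bit of byte 1 extends the
 * 16-bit length held in bytes 2-3. */
static size_t smb_len(const unsigned char *hdr)
{
    return ((size_t)(hdr[1] & 1) << 16) | ((size_t)hdr[2] << 8) | hdr[3];
}

/* Loop until exactly n bytes have been read (hypothetical helper;
 * real code would also handle EINTR and timeouts). */
static ssize_t read_full(int fd, unsigned char *buf, size_t n)
{
    size_t got = 0;
    while (got < n) {
        ssize_t r = read(fd, buf + got, n - got);
        if (r <= 0)
            return r;          /* error or EOF */
        got += (size_t)r;
    }
    return (ssize_t)got;
}
```

With these two pieces, the receive path becomes: read_full() the 4-byte header, smb_len() to size the rest, read_full() up to the end of the fixed SMB header to inspect the command, and only then decide whether to pull the remaining data into userland or leave the write payload on the socket.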
> I was wondering about a much less elegant solution:
> Why can't we do a non-blocking read on the socket, into a (large)
> buffer?

My main goal was to avoid copying the data that is to be written to the 
file out to userland and back to kernelland. That is, to avoid two 
user/kernel crossings. This is such a big win with sendfile (in terms of 
reducing the CPU utilization and thus being able to handle more clients).
While reading lots of requests at once amortizes the system call(s) across 
many requests, it is going to hurt latency and will still cost a lot 
more CPU when handling write requests from clients.
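On the read path the win is already concrete: sendfile(2) ships file data to the socket without a round trip through userland, so the copy stays inside the kernel. A minimal Linux-flavoured sketch of pushing a read reply's data portion this way (the helper name is illustrative; error handling is trimmed):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <sys/sendfile.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Push 'count' bytes of 'fd' starting at 'offset' straight to the
 * socket: the data never crosses into userland (illustrative helper
 * around Linux sendfile(2); real code would retry on EINTR). */
static ssize_t push_file_data(int sock, int fd, off_t offset, size_t count)
{
    size_t sent = 0;
    while (sent < count) {
        ssize_t r = sendfile(sock, fd, &offset, count - sent);
        if (r <= 0)
            return -1;
        sent += (size_t)r;
    }
    return (ssize_t)sent;
}
```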

Also, as an aside, I am not convinced that aggressive header splitting and 
zero copy is all that useful (especially considering that tricks with simply 
remapping the underlying pages to file buffers from mbufs/skbufs will only 
work if the write request is file-block aligned), but I do believe that 
TCP Offload Engine (TOE) chips have a role to play.
> We would then process the commands one by one, until we reached one that
> had a length beyond the end of the buffer.  Then we memmove it to the
> start, read to its end, process, and start the game again.
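The scheme described there, one big read, then walk the complete SMBs and memmove the trailing fragment to the front, can be sketched independently of the socket handling (the function and its callback are hypothetical; handle may be NULL):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Total length of one NetBIOS-framed SMB: 4-byte session header
 * plus the payload length encoded in it. */
static size_t pdu_len(const unsigned char *hdr)
{
    return 4 + (((size_t)(hdr[1] & 1) << 16) | ((size_t)hdr[2] << 8) | hdr[3]);
}

/* Process every complete SMB in buf, then memmove any trailing
 * partial request to the front; *len is updated to the leftover
 * size.  Returns the number of complete requests handled. */
static int drain_smbs(unsigned char *buf, size_t *len,
                      void (*handle)(const unsigned char *, size_t))
{
    size_t off = 0;
    int handled = 0;
    while (*len - off >= 4 && pdu_len(buf + off) <= *len - off) {
        size_t n = pdu_len(buf + off);
        if (handle)
            handle(buf + off, n);
        off += n;
        handled++;
    }
    memmove(buf, buf + off, *len - off);
    *len -= off;
    return handled;
}
```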

I am trying to solve a different problem, and reduce the complexity of 
solving this problem. 

Every system call I can eliminate, or push into the kernel (as in the 
fstat call in the sendfile code) means more CPU for yet another client and 
gets us closer to handling 10,000 clients :-)
> What am I missing here... (I'm sure there must be something).
> Andrew Bartlett

Richard Sharpe, rsharpe at, rsharpe at, 
sharpe at
