Any interest in swat enhancement
John E. Malmberg
wb8tyw at qsl.net
Fri Sep 8 13:35:34 GMT 2000
"Peter Samuelson" <peter at cadcamlab.org> wrote:
> [John E. Malmberg <wb8tyw at qsl.net>]
> > I was assuming that one receiving pipe could accept messages from any
> > number of sending pipes that could be temporarily opened to the
> > receiving pipe while the message was being sent.
> I think you are describing Unix domain sockets. Plain old pipes have
> very low overhead but are also very limited. A pipe is half-duplex and
> has a fixed-sized buffer (say 512 bytes). It can be opened multiple
> times for reading or writing, but if so it is a simple FIFO, i.e.,
> multiple readers do NOT get multiple copies of the same data. You can
> use a single pipe as a message-passing conduit with many writers and
> one reader, but then you must have an independent method (like a
> message header) to identify each sender. Andrew said N^2 because it's
> simpler just to give each sender/receiver pair its own private pipe.
Andrew mentioned that the amount of data to be transferred was small.
Even with the signalling system, the message in the tdb must carry some
delivery information so that the signalled process can verify that the
message is for it.
The signal method is probably more efficient for broadcast style messages.
With the pipe method, if the message is larger than the fixed-size
buffer, or really anything more than a few bytes, it can just contain a
pointer or other reference to where the body of the message actually
lives in shared memory (or a tdb?). The wrapper routine that retrieves
the message would know where to get it after examining the header.
For real-time interprocess and in-process communications, the following
scheme has worked well. There are routines for allocating and
deallocating memory blocks from a global memory section.
When you want to send a message, you allocate a section of global memory
and populate it, then pass it to a send_message routine with some
addressing information. This routine determines whether the message is
local to the system or bound for the network. If it is going out on the
network, the routine takes care of copying it into the network buffer and
deallocating the memory. If it is on the same physical system, it merely
sends a small header through a pipe telling the receiving process where
in global memory the section is.
On the receiving side, the receiver first reads the small header, then
either allocates a section of global memory to receive the network
packet, populates it, and returns it, or simply parses the local header
and returns the pointer.
The advantage here is that for local data moves, there is really no penalty
for sending large messages around.
This is a simplification of course. Often a central routing process is
used to handle either all message routing or just the off-node messages.
That central routing process can become a severe bottleneck, which is
what Andrew specifically wanted to avoid, especially if it crashes.
There is however no real need for a central routing process.
One of the big problems with a global memory system is programs writing
beyond their bounds. If the bookkeeping used to track allocation and
deallocation is stored in the same global memory, one buffer overrun can
take out the whole thing. This method is best used when you can
establish guard bands of memory, protected against all access, on both
sides of each allocated block. It has been explained to me that there is
no common UNIX system call to do this.
wb8tyw at qsl.net
More information about the samba-technical mailing list