using virtual synchrony for CTDB

Steven Dake sdake at
Sat Oct 7 00:22:54 GMT 2006

On Sat, 2006-10-07 at 08:56 +1000, tridge at wrote:
> Steven,
>  > In raw throughput terms, vs can deliver approx 40k messages per second
>  > and does indeed saturate a GIGe network, whereas (over a switched
>  > network) point to point can handle 1600 exchanges per second between two
>  > nodes.  This scales to multiples of 1600 between multiple nodes
>  > depending on switch bandwidth availability.
> Those numbers don't look good I'm afraid. Between two nodes I measured
> 31000 messages/sec on gigabit (both with TCP and UDP). With some
> fancier hardware and MPI we measured around 170000 messages/sec with
> netpipe.
> Maybe you could try the test code at
> on your nodes? I'd
> be curious how close you get to the 31000 we measured.
> To use the tcp2 example, compile on two hosts and run:
> on host1:  ./tcp2 host2 2000 2001
> on host2:  ./tcp2 host1 2001 2002
> it's very primitive code, but is useful for basic throughput on tcp
> and udp. Also does unix domain sockets.
> Note that I get about 17000 messages/sec on my 100 MBit switch at home
> between my laptop and a server box. That uses a 1 byte payload. With a
> 64 byte payload I get about 14000 messages/sec on 100 MBit ethernet.
> So 1600 exchanges/sec (presumably 3200 messages/sec for comparison
> with my test) is very slow for gigabit.
> Cheers, Tridge


It's been some time since I ran my original request/response test.  It
appears the Linux 2.6 kernel has improved significantly on this type of
workload.

My results are 30k/sec with your test program.  I have some notes below
about your test benchmark:

Your test program opens two sockets: one for reading and one for writing.
I don't know if this is your intended design, but it is not typical of a
client/server application.  More likely you would see one socket for all
communication (reads and writes); that is how I wrote my original app.
In that case, performance drops by half.

Your test program also does not measure the full exchange cycle; it
counts ops as the number of read and write operations per second.
I've removed this multiplier, since it doesn't really measure
complete exchanges.

I have put it at and to
match my original app design that tested requests and responses.

The -s is the server and the -c is the client.

slickdeal# tcp-s
shih# tcp-c slickdeal

I get about 6k exchanges/sec in this scenario on my GigE network with
jumbo frames and a 9000-byte MTU.

Your test also uses 1-byte packets.  In this scenario with 3 nodes, cpg
delivers:
527839 messages received     1 bytes per write  10.001 Seconds runtime
52777.113 TP/s   0.053 MB/s.

Performance would be better if a plugin were loaded directly inside
openais instead of going through IPC, since IPC requires several context
switches.


More information about the samba-technical mailing list