using virtual synchrony for CTDB
Steven Dake
sdake at redhat.com
Sat Oct 7 20:48:59 GMT 2006
On Sat, 2006-10-07 at 23:15 +1000, tridge at samba.org wrote:
> Steven,
>
> > Your test program opens two sockets, one for reading and one for writing.
> > I don't know if this is your intended design, but it is not typical of a
> > client/server application.
>
> It's in junkcode for a reason :)
>
> Actually, I used separate ports as it allows me to test the same code
> on localhost. If you specify the same port number twice, you'll get 1
> port between two hosts.
>
> And in case you're wondering, no, this is not anything like the code
> we will use in CTDB. It's just my quick and dirty "how fast is gigabit
> these days?" test. In CTDB I plan on using the Samba4 events library
> and epoll() where available.
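(A rough sketch of the kind of epoll() loop being described -- plain epoll(),
not the Samba4 events library and not actual CTDB code:)

#include <sys/epoll.h>

#define MAX_EVENTS 16

static void event_loop(int listen_fd)
{
        struct epoll_event ev, events[MAX_EVENTS];
        int epfd = epoll_create(MAX_EVENTS);    /* size is only a hint */

        ev.events = EPOLLIN;
        ev.data.fd = listen_fd;
        epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

        for (;;) {
                int i;
                int n = epoll_wait(epfd, events, MAX_EVENTS, -1);

                for (i = 0; i < n; i++) {
                        if (events[i].data.fd == listen_fd) {
                                /* accept() the new client and
                                   EPOLL_CTL_ADD its fd here */
                        } else {
                                /* fd is readable: service one request */
                        }
                }
        }
}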
>
> > More likely you would see one socket for all communication (reads
> > and writes). That is how I wrote my original app. In this case
> > performance drops by half.
>
> why would performance drop? This test code is fully synchronous. It
> doesn't care if it's one socket or two.
>
> > Your test program also does not test the full exchange cycle, but
> > measures ops as the number of read and write operations per second.
> > Instead, I've removed this multiplier since it doesn't really
> > measure exchanges.
>
> yes, that's why I multiplied your result by 2 when comparing.
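(Just to make the factor of 2 explicit -- purely illustrative, not code from
either test program:)

/* one request plus one response is one exchange; the junkcode counted
   each read() and each write() as an "op", hence the factor of 2 */
static double exchanges_per_sec(long reads, long writes, double seconds)
{
        double ops_per_sec = (reads + writes) / seconds;
        return ops_per_sec / 2.0;
}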
>
> > I have put it at www.broked.org:/tcp-s.c and www.broked.org:/tcp-c.c to
> > match my original app design that tested requests and responses.
>
> a few problems :-)
>
> 1) tcp-s.c has the TCP_NODELAY setting commented out. That absolutely
> kills performance! It means you sit waiting after each send, with the
> kernel hoping another send might happen on that socket so it can
> combine them.
>
> 2) tcp-s.c has changed to a 64-byte buffer, but didn't change the
> actual read/write size (the read and write calls are still using size
> 1).
Yes, I was testing the difference between 64-byte and 1-byte buffers, but
the send still uses 1 byte.
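(For what it's worth, a sketch of what fixing 2) looks like -- read and write
the full 64-byte buffer, allowing for short reads/writes; names are
illustrative, this is not the actual tcp-s.c:)

#include <unistd.h>

#define BUFSIZE 64

static int exchange(int fd)
{
        char buf[BUFSIZE];
        ssize_t n, done;

        /* pull in one full 64-byte request (read() may return short) */
        for (done = 0; done < BUFSIZE; done += n) {
                n = read(fd, buf + done, BUFSIZE - done);
                if (n <= 0)
                        return -1;
        }

        /* push out one full 64-byte response */
        for (done = 0; done < BUFSIZE; done += n) {
                n = write(fd, buf + done, BUFSIZE - done);
                if (n <= 0)
                        return -1;
        }
        return 0;
}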
>
> > I get about 6k exchanges in this scenario on my GigE network with jumbo
> > frames and a 9000-byte MTU.
>
> That's the Nagle algorithm killing you. Re-enable the TCP_NODELAY and
> that number should rise a lot.
>
This has no effect on my system either way.
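(For reference, the TCP_NODELAY setting being discussed is a single
setsockopt() call on the connected socket -- a minimal sketch:)

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static int disable_nagle(int fd)
{
        int one = 1;
        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}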
Regards
-steve
> Cheers, Tridge