dbench scalability testing

Andrew M. Theurer atheurer at austin.ibm.com
Mon Mar 12 18:34:35 GMT 2001


> Also, I do not know the state of the Intel gigabit cards; my suggestion is to
> use AceNIC-based cards, since they have been the most tested (for example, the
> tux2 benchmarks use 8 AceNICs with the zero-copy patches).

I will have to recheck which cards we are getting.
 
> You should be able to synchronise the start of smbtortures on different
> client machines and then sum the results. In reality you shouldn't need many
> clients running smbtorture to saturate a server. For example, a single
> 333MHz POWER3 CPU as a client manages to push our servers along at 25MB/s.
> So I'd be surprised if the 8-way will max out four of these all sitting
> back-to-back on their own gigabit channel.

That's what I planned on doing (synchronized smbtorture).  Our clients will
be "low end" Intel Linux/NT desktops, each with Fast Ethernet.  Eventually
we will have 10 clients connected to their own switch with a Gbps link
to the server.  There will be 1 to 6 of these "sets" of clients/switch,
for a possible total of 60 clients and 6 gigabit NICs.  This is probably
overkill for smbtorture, but the setup will also be used for Netbench
comparison runs.
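
As for synchronising the starts: in case it's useful to anyone else, the
launcher I have in mind is roughly the sketch below.  This is only an
illustration; the START_TIME convention and the wrapper itself are my own,
not anything shipped with smbtorture.  Each client is handed the same
absolute start time (e.g. over rsh), sleeps until that moment, and then
execs the real benchmark, so all runs begin together and the per-client
throughput numbers can simply be summed.

/* sync_start.c -- hypothetical launcher, not part of smbtorture.
 * Usage: START_TIME=<unix-seconds> ./sync_start smbtorture <args...>
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *s = getenv("START_TIME");
    if (s == NULL || argc < 2) {
        fprintf(stderr, "usage: START_TIME=<secs> %s <cmd> [args...]\n",
                argv[0]);
        return 1;
    }
    time_t start = (time_t)atol(s);
    while (time(NULL) < start)      /* assumes client clocks are NTP-synced */
        sleep(1);
    execvp(argv[1], &argv[1]);      /* replace ourselves with the benchmark */
    perror("execvp");
    return 1;
}

With the client clocks kept together by NTP, that lines the starts up to
within a second or so, which should be close enough for a multi-minute run.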

> Actually, I'll round up my patch to use MSG_TRUNC in smbtorture. This
> improves client performance somewhat, as we don't waste CPU cycles copying
> data between kernel and user space. (When things don't run fast enough, fix
> the benchmark :)

Great.
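
For anyone reading along: the trick is that on Linux, recv() with MSG_TRUNC
on a TCP socket makes the kernel discard the payload instead of copying it
out to user space, so a client that only cares about byte counts can drain
read replies very cheaply.  A minimal sketch of the idea (my illustration,
not the actual smbtorture patch):

#include <sys/types.h>
#include <sys/socket.h>
#include <stddef.h>

/* Drain n bytes from a connected TCP socket without copying them to
 * user space.  MSG_TRUNC on SOCK_STREAM is Linux-specific: the kernel
 * throws the data away and just returns the byte count, so the buffer
 * pointer is never dereferenced. */
static int discard_bytes(int fd, size_t n)
{
    while (n > 0) {
        ssize_t r = recv(fd, NULL, n, MSG_TRUNC);
        if (r <= 0)
            return -1;              /* error or peer closed */
        n -= (size_t)r;
    }
    return 0;
}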

> For basic testing, smbtorture is much nicer to work with than 16+ Windows
> machines, but it would be nice to get some verification that it really
> approximates Netbench at the high end. Would it be possible to do some
> comparison runs?

Yes.  I'm actually getting the Cygnus inetd/rsh stuff set up on the NT
clients to make the Netbench runs a little less painful.  Since I
currently only have 16 clients, I will probably use multiple engines per
client (and multiple shares) to drive Netbench this week.  On the last
set of tests, I could not drive past ~200 Mbit/s with 16 clients in
Netbench.  That would max out the uniprocessor and 1-way configurations,
but from 2-way to 8-way, CPU idle time just went up and up with almost no
change in throughput.

-Andrew Theurer



