Mon Dec 1 12:27:21 GMT 2003


2.4.18 kernel) using 3 x 60GB WD 7200rpm IDE drives on a 7500-4 controller,
I could get peak I/O of 452 MBytes/sec and a sustainable I/O rate of over
100 MBytes/sec. That is not exactly a 'dunno' performance situation. These
tests were done using dbench and RAID5.

Let's get that right:

100 MBytes/sec == 800 Mbits/sec, which is just a tad over 100 Mbits/sec
(the bottleneck if you use 100-Base-T as the NIC).

In actual CIFS benchmarking tests over 1Gb Ethernet between two identical
machines, I have clocked sustained I/O at over 70 MBytes/sec. That's
70 x 8 = 560 Mbits/sec - still just a tad more than 100 Mbit/sec Ethernet
can handle.

So, if you want to give your MS Windows clients breathing space and
performance, use 1Gb Ethernet at least between the etherswitch (a cheap
device today) and the Samba server. Hence my recommendation.
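
If you want to run the same sanity check against your own numbers, the
bookkeeping is just a multiplication by eight. A quick back-of-envelope
sketch (Python, purely illustrative - the figures in the comments are the
ones quoted above):

    # Convert file-server throughput (MBytes/sec) into wire bandwidth
    # (Mbits/sec) so it can be compared against what the NIC can carry.
    def mbytes_to_mbits(mbytes_per_sec):
        return mbytes_per_sec * 8

    print(mbytes_to_mbits(100))   # 800 Mbits/sec - buries a 100-Base-T link
    print(mbytes_to_mbits(70))    # 560 Mbits/sec - still wants 1Gb Ethernet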

>  > If price point is an issue, check out the Tyan K7 2462 motherboard. You
>  > might like to deck it out with two MP2G+ CPUs (cheap but effective). I'd
>  > look at 4 IBM 80GB+ 7200rpm IDE drives off the RAID storage (on the 3WARE
>  > 7500-4 controller) and one 60GB 7200rpm IDE boot drive on which I'd
>  > install my OS.
>
> For the boot drive I would use a 40GB or 60GB RAID1 using a 2-port 3ware
> card.  There's nothing quite like the hell of that lone hard drive going bad
> at an inopportune time.

Granted, redundancy on the boot drive is nice, and you could run it off a
RAID controller, but then if you ever update the OS and happen to lose
your RAID driver you are kind of HOSED!

Better to use a hardware IDE mirroring solution that totally hides the
mirroring behind a standard ATA interface. Given the complexity of that,
I'd rather risk having the boot drive not mirrored.

You can always use the Linux software RAID (md) driver to mirror two IDE
boot drives and get some redundancy that way. Again, your mileage may vary
on this approach.
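
Purely as a sketch of what that looks like (the device names below are
illustrative only - substitute your own partitions - and on an older
raidtools-based setup you would use /etc/raidtab and mkraid rather than
mdadm):

    # Mirror two IDE boot drives with the Linux md driver (RAID1).
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1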

> Watch out for the hardware RAID5 on those 3ware cards.  The performance on
> those is... disappointing to say the least.

Really? What was your setup? I too found them disappointing in 32-bit
slots, but in the Tyan K7's 64-bit 66MHz PCI slots they simply roar!! Oh, I
did have to mess around with the driver: I found the driver provided by
3Ware significantly out-performed the standard Linux kernel driver on the
2.4.18 kernel.

>
> On one of my servers switching from hardware RAID5 to software RAID5 tripled
> (!!!) throughput.  I went from 15MB/sec writing to 50MB/sec writing and
> 30MB/sec reading to 75MB/sec reading (aka a saturated PCI bus).  I expect it
> to go faster, at least when reading, once I put the controller (I was using
> a 7500-4) in a 64-bit slot.

64-bit PCI at 66 MHz => 64 x 66 / 8 = 528 MBytes/sec at 100% bandwidth
saturation and zero latency.

Your 75 MB/sec seems VERY poor if it was in a 64-bit slot, but is approx.
what I got in a 32-bit PCI slot. The Tyan K7 motherboard I suggested has
64-bit 66MHz PCI slots.

FYI: the theoretical zero-latency I/O limit of a 32-bit 33MHz PCI slot is
32 x 33 / 8 = 132 MBytes/sec. If you got 75 MBytes/sec out of it, that is
NOT bad. Still, 32-bit PCI is NOT a good solution for file and print
sharing for 50+ users, considering how cheap a 64-bit PCI solution is now.
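
If anyone wants to plug other bus configurations into that formula, the
arithmetic is trivial. A small illustrative sketch (Python; it ignores
arbitration, protocol overhead and latency, so real-world numbers will
always come in lower):

    # Theoretical peak PCI bandwidth:
    # bus width (bits) x clock (MHz) / 8 bits per byte.
    def pci_peak_mbytes(width_bits, clock_mhz):
        return width_bits * clock_mhz / 8.0

    print(pci_peak_mbytes(32, 33))   # 132.0 MBytes/sec - 32-bit 33MHz slot
    print(pci_peak_mbytes(64, 66))   # 528.0 MBytes/sec - 64-bit 66MHz slot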

Considering that each WD 60GB 7200rpm drive I used is rated by the
manufacturer at a sustainable I/O rate of 37.6 MBytes/sec, with 3 drives
the theoretical sustainable I/O is 3 x 37.6 = 112.8 MBytes/sec.

The 3Ware RAID controller has an 8MB cache for each drive, hence the peak
of 452 MBytes/sec, which is not sustainable under heavy write load.
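
The same back-of-envelope treatment shows why the burst and sustained
figures are so far apart (again illustrative Python; it assumes the drives
stream at their rated sustained speed and ignores RAID5 parity overhead):

    # Sustained throughput is bounded by the platters, not the caches.
    drives = 3
    sustained_per_drive = 37.6            # MBytes/sec, manufacturer rating
    print(drives * sustained_per_drive)   # ~112.8 MBytes/sec sustained
    # versus the 452 MBytes/sec burst, which comes out of the caches.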

>
> An excellent solution would be to get a 7500-8 - 2 ports for booting, and 4
> of the other 6 in a JBOD for use with software RAID.
>
> If you use IDE drives with software RAID5 dual CPUs are a must.
>
>  > Consider 1Gb/s ethernet to an etherswitch that has 1 Gb/s port and the
>  > rest 100Mb/s.
>
> Gigabit could be considered overkill.

I hope I have answered this point clearly above. With 100-Base-T, the
network will be the system I/O bottleneck.

>
>  > PS: The 3WARE IDE RAID give you an I/O bandwidth of up to 452MB/s,
>  > compared with 320 Mbits/s on fastest SCSI. Big difference in
>  > performance!
>
> Be aware that those are pure numbers that, unfortunately, don't translate
> into the real world.

I hope I addressed this adequately above.

> My experience is that the U320 SCSI will easily give the same throughput but
> with substantially lower CPU usage if you use software RAID.

I compared a $15,000 SCSI RAID system with the $2,500 AMD-based 3WARE
RAID5 system. True, the CPU load on the dual MP1600 was a little higher
than on the more expensive brand-name system, but at what cost
difference?? Also, the 3WARE IDE RAID system ran rings around the
brand-name system when considering the total aggregate I/O delivered over
1Gb Ethernet to the clients. That is, after all, what the clients care about!

PS: This is NOT an advertisement for an AMD-based system!!! Choose your
weapon carefully and make sure it is the right one for the job at hand.
There are also very good Intel-based 64-bit PCI solutions that should be
considered. Price is only one criterion when designing a solution, and
often the higher-priced, lower-performing solution is the right one in the
long run when service, support, and reliability are fully evaluated.

> If you use hardware RAID they won't even be in the same ballpark.

???

Your mileage may vary! ;)

Each of us goes by what we have experienced.  I can tell you it took more
effort than I care to say to get reproducible results before I went public
on my benchmark figures. None of my performance numbers were just from one
isolated test.

>  > I'd also seriously look at updating all workstations to a common OS
>  > platform. It will make your life of administration a lot more peaceful.
>
> I second this as well.

- John T.
-- 
John H Terpstra
Email: jht at samba.org


