bonnie++ Sun Fire X2100 on Solaris 10 and CentOS -- 5-year old system (my bonnie++)

Bryan J. Smith b.j.smith at ieee.org
Sun Dec 11 02:42:01 UTC 2005


On Sat, 2005-12-10 at 18:39 +0100, Eugen Leitl wrote:
> Here's a bonnie++ benchmark ...

Run how?  Did you run it on the local system?  Over an NFS mount from
one client?  Or over NFS mounts from hundreds of clients?
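
For reference, here's roughly how those cases differ -- the paths, the
server name and the run size below are only illustrative (the 3000MB
matches the run shown further down):

  # Local run: only the discs and the local I/O path are in play.
  bonnie++ -d /export/test -s 3000 -u nobody

  # Same run against an NFS mount from one client: now the wire, the
  # NFS stack and the server's CPU interconnect are all in the path.
  mount -t nfs server:/export /mnt/nfs
  bonnie++ -d /mnt/nfs/test -s 3000 -u nobody

Multiply that second case by hundreds of clients and you get the
contention I keep harping on below.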

If you're not seeing a recurring theme here, it's that I've continually
put forth the point that once you start hammering your system with
traffic from numerous other systems, you need to reduce all the CPU-
interconnect contention you can.  ;->

A local bonnie benchmark nets you little.  But I'll see your call with a
5-year-old system, 5-year-old hard drives, and an almost 5-year-old
3Ware Escalade 7800.  ;->

P3 850MHz with 512MiB Reg ECC PC100, ServerWorks IIILE chipset, 3Ware
Escalade 7800 (64-bit at 33MHz = only a measly 0.25GBps) with a 6-disc
RAID-10 on _old_ 5400rpm Ultra66 80GB (20GB/platter) Maxtor disks:  

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
dilbert.ovied 3000M 15090  97 38337  33 18800  12 16303  87 56843  16  96.0   0

Now look at those benchmarks.

First off, the "Per Chr" performance is _dismal_ because of the P3
architecture and interconnect.  As you can see, the CPU utilization is
pegged near max, meaning even this almost 5-year-old 3Ware Escalade 7800
card would be far more capable on the Opteron!
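
The per-char numbers are really a CPU test -- every byte goes through
the stdio layer one character at a time, so a slow CPU/interconnect caps
the rate long before the discs do.  A crude way to see the same effect
(just an illustration, not how bonnie++ itself does it):

  # One write() per byte: the CPU is the bottleneck.
  dd if=/dev/zero of=/tmp/perchr.tmp bs=1 count=1048576
  # One 1MiB write: the discs and the bus are the bottleneck.
  dd if=/dev/zero of=/tmp/block.tmp bs=1048576 count=1
  rm -f /tmp/perchr.tmp /tmp/block.tmp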

Secondly, look at the block operations.  Those are not bad CPU
utilization rates in the block modes compared to your system -- one over
4 years newer, with almost an order of magnitude more memory and CPU
interconnect bandwidth.  Now here's the kicker ...

My more "real world" rewrite block data transfer rate (DTR) is basically
3 times yours!!!  How is that?  My 3Ware Escalade is a command-queuing
storage controller.  Your nVidia MCP-04 SATA doesn't even do NCQ, so
it's "dumb" ATA block I/O.  But even NCQ might not be enough --
especially _not_ when it comes to software RAID, because NCQ queues for
individual disks, not for the cohesive volume.
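
If you're curious what queuing you're actually getting on a Linux box
with libata-driven SATA, something like this will tell you (the device
name is just an example):

  # Does the drive even advertise NCQ, and at what queue depth?
  hdparm -I /dev/sda | grep -i queue
  # What queue depth is the kernel actually using (1 = no queuing)?
  cat /sys/block/sda/device/queue_depth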

You could chalk it up to the fact that I have 6 discs in RAID-10, so the
stripe is effectively 3x as wide (3 discs, then mirrored, for 6 total).
But these are disks almost 5 years old!  Over 1/6th the density of your
drives!  It's the command queuing of the card.  It's that damn 64-bit
ASIC on the 3Ware off-loading so much from the CPU -- look, only 12% CPU
utilization!  I know you only had 2%, but this is an _old_ P3 850MHz!
And I'm rewriting 3x as much data as you are!

Now I could reconfigure the server with newer Seagate 7200.8 200GB disks
in RAID-10 -- I have 6 right here, and I'm planning to put them in the
system soon.  Until then, these numbers are from almost 5-year-old
5400rpm, Ultra66 80GB (20GB/platter) Maxtor drives.  In reality, I'm
getting only about 15MBps/disc, which sounds about right for the
technology period of these disks.
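
Back-of-the-envelope from the block figures above, assuming the 6-disc
RAID-10 behaves as a 3-way stripe:

  # block write:  38337 KB/s / 1024 / 3 discs  ~= 12.5 MB/s per disc
  echo "scale=1; 38337 / 1024 / 3" | bc
  # block read:   56843 KB/s / 1024 / 3 discs  ~= 18.5 MB/s per disc
  echo "scale=1; 56843 / 1024 / 3" | bc

... which lands right around that ~15MBps/disc figure.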

-- Bryan

P.S.  Of course, I'd love to show off an 8-disc RAID-10 volume on a
3Ware Escalade 8506-8 or -12 in the kind of Opteron platform I typically
deploy for clients.  But I don't have one in my house, nor at my current
place of work (as of 2 months ago).  I now work for someone else, and
it's all small-form-factor embedded work at a small company.


-- 
Bryan J. Smith   mailto:b.j.smith at ieee.org
http://thebs413.blogspot.com
------------------------------------------
Some things (or athletes) money can't buy.
For everything else there's "ManningCard."




