Fedora SMP dual core, dual AMD 64 processor system

Bryan J. Smith b.j.smith at ieee.org
Wed Aug 24 03:16:30 UTC 2005


On Tue, 2005-08-23 at 18:43 -0700, Bill Broadley wrote:
> How about:
> SIIG http://www.siig.com/product.asp?catid=14&pid=1003

Still FRAID (fake RAID).

> Promise: http://www.promise.com/product/product_detail_eng.asp?segment=RAID%205%20HBAs&product_id=156

Ahhh, now that's better.  I didn't know about it.
IOP333-based.  Still PCIe-to-PCI-X bridge, but better at sub-$400.

> Tekram: http://www.pc-pitstop.com/sata_raid_controllers/arc1260.asp

$500-600, but still an option.

> Intel: http://www.intel.com/design/servers/RAID/srcu42e/index.htm

$650+!  Same IOP332 design as the 320-2E in essence.

At least the Promise card is sub-$400.  Thanx for pointing it out.
Now I have a new option.

> Er, I've no idea what you are describing?  You are having some strange
> problems with 16-bit BIOS with pci-express cards?

Yeah, like the fact that their RAID function does _not_ work once the OS
boots.  That's because the software RAID logic is 100% proprietary
(licensed from a 3rd party), so it's not GPL.  And the vendor typically
releases only a binary version for a specific kernel.

Have you actually tried these cards in Linux with their FRAID logic?

> This is relevent to building a 64 bit fileserver how?
> Your claiming the cards I'm mentioning don't work? 

The FRAID cards don't, period.  The reverse-engineered GPL drivers
(hptraid.c, pdcraid.c, silraid.c) built on the GPL ataraid.c logic are
buggy.  They are also much slower than LVM/MD.

> Why?  What is wrong with MD?

Nothing.  I said _FRAID_ (fake RAID).

> It performs better than the hardware RAID controllers I've tried,

RAID-0?  Of course!

RAID-1?  Now you're starting to push 2x the writes over your buses.

RAID-5?  Please!  Have you seen the overhead of pushing every single bit
up your memory-CPU interconnect?  It's not the XOR that gets you, it's
the interconnect overload.

I invite you to read up on how poorly software RAID-5 performs,
especially during a rebuild.
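
To put rough numbers on it, here's a back-of-the-envelope sketch in
Python.  The figures and the way I'm counting host-side traffic are
purely illustrative assumptions on my part, not measurements of any
particular box:

def sw_raid5_host_traffic(app_write_mbps, data_disks):
    # Full-stripe software RAID-5 write, crudely counted: every data byte
    # is pulled through the CPU once to compute the XOR parity, then the
    # data plus the parity strip are pushed back out to the member disks.
    parity_mbps = app_write_mbps / data_disks    # one parity strip per stripe
    to_disks = app_write_mbps + parity_mbps      # data + parity headed to disk
    through_cpu = app_write_mbps                 # bytes XOR'd in the CPU
    return to_disks + through_cpu

def hw_raid5_host_traffic(app_write_mbps):
    # With an on-board XOR engine, the host hands the data to the card once.
    return app_write_mbps

if __name__ == "__main__":
    mbps, data_disks = 100, 3   # 100MBps of writes onto a 4-disk RAID-5
    print("software RAID-5 host traffic:", sw_raid5_host_traffic(mbps, data_disks), "MB/s")
    print("hardware RAID-5 host traffic:", hw_raid5_host_traffic(mbps), "MB/s")

Crude, yes, but it shows where the hit comes from: not the XOR itself,
the extra traffic on the interconnect.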

> provides a standard interface, and actually allows me to migrate RAIDs
> between different machines and controllers in face of failure.

Again, that is a commonly made claim, but it's a very _poor_
_exaggeration_ of the "real world"!

You have differences in LVM/MD between kernels, not to mention a _lot_
of bugs!  I have not had the so-called "bliss" with LVM/MD that you
have.  I have run into a complete lack of support for my LVM/MD volumes
when moving systems of any significant age.

Again, if you stick with LSI and, especially, 3Ware, they tend to
maintain full volume compatibility upward.  That is far, far better
than what the Linux kernel's LVM/MD has offered.

> I also like adding a single line to the crontab and I get email
> whenever the RAID changes state.  

You obviously haven't used 3Ware's 3DM2 (or prior 3DM).
You really should have some experience before making assumptions.

> What is wrong with the IOP331, 332, and 333 family and the host of
> controllers based on it?

Well, for starters, the IOP331 is _PCI-X_, not PCIe.

But yes, I _was_ aware of the IOP332.  Now they have an IOP333.

I appreciate the link to the Promise card; it's the first sub-$400 (or
even sub-$500, for that matter) card I've seen with an XScale and SATA.

> See above.

And I noted above.  Again, thanx for 3 of the 4 products.

> I don't recall the discussion being about hardware RAID, just RAID. 
> Are there some hardware RAID advantages I'm not aware of?

FRAID is _worse_ than LVM/MD.
You're better off using LVM/MD.

> Your choice, see above.  For smart controllers intel, promise, tekram,
> and lsi seem to offer fine cards for building a pci-e based server for.
> Wasn't that the whole point?

Yes, and I appreciate you pointing them out.
Especially the Promise product.

> Please post RAID-5 numbers using a 7000/8000 controller showing greater
> than 130 MB/sec bandwidth. 

At reads?  Oh, you bet!

Now you show me software RAID-5 doing sustained writes (meaning pushing
through _significantly_ more data than fits in memory, so it is true
disk I/O) at over 100MBps without rendering the I/O interconnect
useless.

That's the problem with software RAID-5 -- you're pushing every single
byte through the CPU's interconnect, tying your whole system up.

> Prove it, please post numbers. 

Do you even know how RAID-3/5 works when it's implemented in software?
Versus a system where the RAID-5 is done on-board?
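
For anyone following along, here's a toy Python sketch of the
read-modify-write a software RAID-5 layer does for a small write.  The
block contents are made up; the parity math is the standard XOR update:

def xor_blocks(a, b):
    # Byte-wise XOR of two equal-length buffers.
    return bytes(x ^ y for x, y in zip(a, b))

def rmw_small_write(old_data, old_parity, new_data):
    # Classic RAID-5 parity update for a sub-stripe write:
    #   new_parity = old_parity XOR old_data XOR new_data
    # That means two extra reads (old data, old parity) and two writes
    # (new data, new parity).  In software, every one of those buffers
    # crosses the host's memory bus.
    new_parity = xor_blocks(xor_blocks(old_parity, old_data), new_data)
    return new_data, new_parity

if __name__ == "__main__":
    old_data, other_data = b"\xaa" * 8, b"\x0f" * 8
    old_parity = xor_blocks(old_data, other_data)   # parity over a 2-data-disk stripe
    new_data, new_parity = rmw_small_write(old_data, old_parity, b"\x55" * 8)
    # Sanity check: the new parity still reconstructs the untouched strip.
    assert xor_blocks(new_parity, new_data) == other_data

On an on-board RAID-5 controller, those extra reads and the XOR stay
out on the card instead of on your memory bus.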

Dude, read up on some things yourself.

> You claim that it's often the I/O bus that is the bottleneck, I claim with
> hardware raid it's usually the hardware RAID controller itself that is
> the issue.

On old i960 designs?  Of course!

But on various ASIC and XScale designs?  I think not!

Otherwise, why don't we use PCs as switches and routers instead of
devices with intelligent switch fabrics?

You obviously don't know the first thing about how software-driven
storage works on a generic PC versus an I/O processor.

> I'm open to counter examples.  I'd suggest bonnie++ or iozone if you
> want a benchmark to measure bandwidth.

It's more than that.  You have to measure the I/O being eaten up by the
extra software operations.  If you're bogging down your system with
redundant I/O just to support the operation, it doesn't show up in your
CPU numbers.  But it does show up in your performance.
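
Here's the kind of thing I mean, sketched in Python.  It snapshots
/proc/diskstats around a benchmark run so you can compare what the
member disks really moved with what the benchmark thinks it did.  The
device names in the usage comment are placeholders, not anything from
this thread; substitute your own members:

def disk_sectors(devices):
    # Parse /proc/diskstats into {device: (sectors_read, sectors_written)}.
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] in devices:
                # 512-byte sectors read and written for this device
                stats[fields[2]] = (int(fields[5]), int(fields[9]))
    return stats

# Usage sketch (hypothetical member disks of a software RAID-5):
#   members = ["sda", "sdb", "sdc", "sdd"]
#   before = disk_sectors(members)
#   ... run bonnie++ or iozone against the array here ...
#   after = disk_sectors(members)
#   for d in members:
#       read_mb = (after[d][0] - before[d][0]) * 512 / 1e6
#       written_mb = (after[d][1] - before[d][1]) * 512 / 1e6
#       print(d, read_mb, "MB read,", written_mb, "MB written")
#   # Compare the totals with what the benchmark itself reports; the
#   # difference is the redundant I/O the software RAID layer generated.

That redundant traffic is exactly what never shows up in a bare
bonnie++ throughput number.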

Dude, I've built too many file and database servers to even continue
this.

> Sure, and I've seen 3x improvement ditching multi $k hardware raid
> controllers and using software raid on a cheap controller.

Depends on how crappy the controller is.

> In fact when
> Dell was pushed as to why they couldn't manage the write to a 50MB/sec
> specification on a rather expensive disk array they admitted they used
> software RAID internally.

This is because they are using an _old_ i960 that can't break 50-60MBps.
Don't compare the 10-year-old i960 designs that Dell _stupidly_ ships
to 3Ware's ASICs or newer XScale processors.  I can't help it if Dell
is stupid.

> To meet the spec they refunded the price of
> the expensive hardware RAID controller and replaced it with a "FRAID".

Dell is stupid, not my fault.

The first thing I tell people is that it's _better_ to use LVM/MD than
to use an old i960 or IOP30x-based design.

If you don't at least have an ASIC designed specifically for storage, or
a newer microcontroller like an Intel XScale (IOP31x+ -- typically
IOP33x these days), then LVM/MD _is_ better.

Again, you obviously aren't "current" on ASIC/microcontroller
developments over the last 5 years.

> I'm open to using (and paying for) hardware RAID if it gave me more
> performance in exchange for learning the custom command line, web
> interface, serial interface, java interface, BIOS interface, or even the
> occasional windows only client that is required to configure, reconfigure,
> and/or monitor the RAID.  In my experience this hasn't been the case,

Because you've been deploying i960/IOP30x designs.
I don't blame you for assuming incorrectly.

> I'm more to open to counter examples though.

Actually, 2 of the 4 you sent me were IOP32x/33x designs and quite
good.  Another one might be an IOP32x/33x as well.


-- 
Bryan J. Smith     b.j.smith at ieee.org     http://thebs413.blogspot.com
----------------------------------------------------------------------
The best things in life are NOT free - which is why life is easiest if
you save all the bills until you can share them with the perfect woman



