Opteron Vs. Athlon X2

Bryan J. Smith b.j.smith at ieee.org
Fri Dec 9 02:56:01 UTC 2005


Bill Broadley <bill at cse.ucdavis.edu> wrote:
> Your impression is wrong.  For a pretty reasonable
> explanation check:
> 	http://en.wikipedia.org/wiki/Pci-e

Actually, I don't always trust Wikipedia, so I hit the PCI
Special Interest Group (PCI-SIG).  Sure enough, you were
correct:
  http://www.pcisig.com/news_room/faqs/faq_express/

250MBps per lane in each direction.
So PCIe x8 is, indeed, 2.0GBps + 2.0GBps.
So I stand corrected.
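
A quick sanity check in Python (just the arithmetic from the
PCI-SIG numbers above, nothing vendor-specific):

  # PCIe 1.x: 2.5GT/s per lane with 8b/10b encoding leaves
  # 250MBps of usable bandwidth per lane, per direction.
  LANE_MBPS = 250

  def pcie_bandwidth(lanes):
      """(each-way, aggregate) bandwidth in GBps for a link."""
      each_way = lanes * LANE_MBPS / 1000.0
      return each_way, each_way * 2

  for lanes in (1, 4, 8, 16):
      each_way, total = pcie_bandwidth(lanes)
      print("x%-2d -> %.1fGBps each way (%.1fGBps aggregate)"
            % (lanes, each_way, total))

An x8 link works out to exactly the 2.0GBps + 2.0GBps above.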

But, FYI, the Areca 1200 series uses the Intel IOP332, which
actually has a PCI-X to PCIe bridge internally, with the
hardware on the PCI-X side
(http://developer.intel.com/design/iio/iop332.htm).  So I'm
sure they are getting nowhere near that.  But it's quite
enough to handle any stream of data, regardless.

> Because the channel is full duplex it's easier to get a
> higher percentage of the full bandwidth (unlike PCI-x.)

Yes, I've noted a few benchmarks of the Areca 1100 series
(PCI-X 1.0) versus the 1200 series (PCIe x8) showing the
latter to be 5-10% faster.

> We have had this discussion before.  Not sure how a few
> hundred MB/sec of I/O is supposed to eat up 4GB/sec of
> Pci-e bandwidth. 

Because with software RAID you're writing several additional,
redundant copies to disk in the case of RAID-1 or 10, or
pushing every single bit from memory to CPU and back to
memory just to calculate parity.  It may seem like 200-300MBps
doesn't make a dent in an I/O or CPU interconnect measured in
GBps, but when you're moving enough redundant copies around
for just mirroring or XORs, it does detract from what else
your system could be doing.
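
To make the XOR point concrete, here's a toy sketch in Python
of the parity computation software RAID-5 does on the host CPU
(block size and byte values are made up for illustration):

  # Toy RAID-5 parity: every byte of every data block has to
  # pass through the host CPU just to produce the parity block.
  def xor_parity(blocks):
      parity = bytearray(len(blocks[0]))
      for block in blocks:
          for i, b in enumerate(block):
              parity[i] ^= b
      return bytes(parity)

  # Three 4KiB data blocks -> one 4KiB parity block; the CPU
  # and memory bus touch all 12KiB just to emit 4KiB of parity.
  data = [bytes([d]) * 4096 for d in (0x11, 0x22, 0x44)]
  assert xor_parity(data) == bytes([0x11 ^ 0x22 ^ 0x44]) * 4096

A hardware controller does that XOR on its own silicon, so the
data only crosses the host bus once.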

I agree it's getting to the point that hardware RAID is less
and less of a factor, as the I/O and CPU-memory interconnects
can cope much better.  But before PCI-X (let alone PCIe or
Opteron/HyperTransport) and multi-GBps memory channels, it
was a major factor.

> Keep in mind that PCI-e isn't shared.  To talk to, for
> instance, another PCI-e device you are using separate
> lines (again unlike pci-x).

I know, that's why I want to see more PCIe hardware RAID.
The Arecas are very nice, since Intel offers the IOP332 (and
IOP333) with a built-in PCI-X to PCIe bridge, which they use
(although it would be nice if they were PCIe-native).
Supposedly the 3Ware/AMCC 9550SX PowerPC controller is also
capable of PCIe.

> So PCI-e is a point to point duplex connection, reads and
> writes from multiple devices do not compete for bandwidth,
> nor do they have to negotiate for the bus.

Yes, I know, that's why PCIe is a great option short of
having dual (or even quad, let alone 6) PCI-X channels.

Any Opteron solution with the AMD8131 has 2 PCI-X channels.
The HP DL585 has 4, and the Sun Fire V40z has 6 (4 slots at
the full 133MHz PCI-X).  There is also the AMD8132, which is
PCI-X 2.0 (266MHz), although its adoption is sure to be
limited by the growing adoption of PCIe.

Once PCIe x4 and x8 cards become commonplace (which is almost
the case), there will be little need for PCI-X other than
legacy support.  I'm still hoping to see a low-cost PCIe x4
SATA RAID card, 2 or 4 channel, because the current entry
point seems to be $400+.

> To avoid a repeat of previous arguments please post ACTUAL
> numbers showing the superiority of hardware RAID.  I don't
> deny it's possible, but without real numbers the rest is
> hand waving.  I've sustained well over a GB/sec of I/O with
> an opteron system, I've not experienced the "hogging I/O"
> problem.

Most of the reviews I've seen have not done so.  At most they
show CPU utilization and RAID-5 performance, which is where
software RAID can't keep up with the Areca 1100/1200: it has
to push down redundant copies (RAID-1 and 10) or move a lot
of data through the CPU just for XOR.  There were some
reviews at GamePC.COM in the summer/fall, but they leave much
to be desired (especially since their 3Ware 9500S tests were
running with the old, buggy firmware).

I'm still waiting for someone to show off a quality,
multi-GbE setup with dozens of NFS clients, and some
sustained, multi-client bonnie benchmarks.  I don't deny that
a quality dual-Opteron with these new PCIe x4/x8 cards is
making software RAID more and more viable versus hardware.  I
would really like to see a serious set of benchmarks myself,
to show how much the gap has narrowed (if not been virtually
eliminated).
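
Something like the following would do it, sketched in Python
(the hostnames, mount point, and file size are hypothetical;
the flags are bonnie++'s standard -d/-s/-u options):

  # Kick off bonnie++ on several NFS clients at once and
  # collect the per-client results over ssh.
  import subprocess
  from threading import Thread

  CLIENTS = ["client01", "client02", "client03"]  # hypothetical
  CMD = "bonnie++ -d /mnt/nfs/scratch -s 4096 -u nobody"

  def run(host):
      result = subprocess.run(["ssh", host, CMD],
                              capture_output=True, text=True)
      print("==== %s ====\n%s" % (host, result.stdout))

  threads = [Thread(target=run, args=(h,)) for h in CLIENTS]
  for t in threads:
      t.start()
  for t in threads:
      t.join()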

> The linux drivers seem fine, I've not played with the
> user-space tools, so far just the web interface.

How is the web interface?
What do you have to install on the host to get to it?

> I know they are out there but I've not used them.  Can
> anyone else on the list comment?



> With software RAID I can't tell the difference between
> 3ware and areca,

You mean you're using the Areca for software RAID?  You're
throwing away all its XScale power?

If so, why don't you just use a cheaper card with 4 or 8 SATA
channels?  I noticed HighPoint now has one for PCIe, and
Broadcom should have one as well (or at least shortly).

It would probably perform better because the XScale wouldn't
be in the way.  Same deal on PCI-64/X: anyone doing software
RAID on a 3Ware would be far better off going with a Broadcom
RAIDCore, which doesn't have an ASIC/microcontroller between
the bus and the channels.

E.g., the most common reason I see people say they went 3Ware
was for hot-swap support.  But hot-swap doesn't work well
_unless_ you use its hardware RAID, hiding the individual
disks from the OS behind its hardware ASIC.  That seems to
be a repeat issue.

> I don't have any extensive production use of hardware
> RAID on either.  Not since I lost a few filesystems to a
> buggy 3ware hardware raid driver back in the 6800 days.

If you mean the RAID-5 support, 3Ware was _stupid_ to update
the 6000 series to support RAID-5.  It was _only_ designed
for RAID-0, 1 and 10 -- hence the new 7000 series was
quickly introduced.  But they did it to appease users; not
smart, IMHO.

> Of course 3ware has gotten much better since then.

I don't think they've ever been "bad," but they've done some
_stupid_ (technically speaking ;-) things at times.  Even I
_never_ adopt a new 3Ware firmware release until I've seen it
"well received."  E.g., the 3Ware 9.2 firmware had a bug that
killed write performance if you had more than 1 array.


-- 
Bryan J. Smith                | Sent from Yahoo Mail
mailto:b.j.smith at ieee.org     |  (please excuse any
http://thebs413.blogspot.com/ |   missing headers)



