SuperMicro H8SSL-i (ServerWorks HT1000) -- providing technical information

Bryan J. Smith b.j.smith at ieee.org
Thu Dec 8 00:45:37 UTC 2005


Peter Arremann <loony at loonybin.org> wrote:
> Yeah - HGST makes the WD drives except possibly the
> Raptors.

Actually, I've all but first-hand confirmed the WD Raptors
roll off the same line as the Hitachi 10K Ultrastars.

> There are a few reports that say the Raptor line is made by
> Seagate but no one seems to know for sure.

If this is true, I'd love to confirm it.  WD shops around as
they really don't have any fab production of their own
anymore.

> There is actually a class action suit out there against
> IBM over the 75GB disks. Most people say it was because of
> too little space between the platters and therefore the
> little tolerance to shock/heat...  Do you know if the 500GB
> Hitachi drives have the same reliability issues?

I don't know.  I just know _no_one_ had put 5 platters in a
7200rpm drive since ... until now.  With the increase in
thermal tolerances to 60C ambient, they might be able to get
away with it.

Seagate uses 4x 133GB platters in their new 500GB 7200.9.

> OK - I've one more question... Does the reliability of a
> single disk really matter much in your environment anymore?
> We run a few hundred servers with anything from ancient 1GB
> SCSI disks to newer 400GB SATA drives.  RAID in one
> form or another. There were several instances where we lost
> data. One time we had a dead director and when it died the
> EMC somehow ended up writing bad data to the drive. 

One of the key issues with hardware RAID solutions is keeping
the firmware, kernel driver and user monitoring software
in-sync.  That bit me a long time ago on an ICP-Vortex and
I've never repeated it.

> Another time, we lost data because of a backplane in an
> E450 - it fried the circuit boards on several drives at once.

Hmmm, that's a good one.  Never heard of a backplane frying.

> Just recently we lost data on several SUN 3510. Seagate OEM
> fibre 15K rpm disks.  I cannot, however, remember a single
> time over the past several years where we lost data because
> of enough disks in a raid going bad at the same time.

I have.  I've had a 2nd ATA disk fail on an 8-channel 3Ware
card while it was in the middle of a RAID-5 rebuild.  That
was before the new crop of enterprise-grade, commodity-
capacity disks.

RAID-6 is starting to appear to combat this exact scenario.
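To see why a 2nd failure mid-rebuild is fatal with single
parity, here's a minimal sketch (plain Python, hypothetical
4-byte stripes) of the XOR math behind RAID-5: one parity
block gives you one equation, so it can reconstruct exactly
one missing member.  RAID-6's second, independent syndrome is
what buys tolerance for the two-disk case.

```python
from functools import reduce
import operator

def xor_blocks(blocks):
    """XOR equal-length byte blocks together (RAID-5-style parity)."""
    return bytes(reduce(operator.xor, column) for column in zip(*blocks))

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks on three member disks
parity = xor_blocks(stripe)            # parity block on a fourth disk

# One member lost: parity XOR the survivors reconstructs it.
rebuilt = xor_blocks([parity, stripe[1], stripe[2]])
assert rebuilt == stripe[0]

# Two members lost: parity XOR the lone survivor yields only the
# XOR of the two missing blocks -- one equation, two unknowns,
# so no recovery is possible.
residue = xor_blocks([parity, stripe[2]])
assert residue == xor_blocks([stripe[0], stripe[1]])
```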

I've typically preferred RAID-10 because, in addition to the
performance benefits, it gave me better-than-even odds that a
2nd disk failure wouldn't be the other half of the degraded
mirror.
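The RAID-10 odds above can be made precise for a given array
size.  Here's a quick sketch (hypothetical helper, assuming
the 2nd failure strikes a surviving disk uniformly at random)
of the chance that it misses the degraded mirror's partner:

```python
def raid10_second_failure_survival(pairs):
    """Probability that a 2nd random disk failure in a RAID-10 array
    of `pairs` mirrored pairs does NOT hit the already-degraded
    mirror's surviving partner (the only fatal choice)."""
    disks = 2 * pairs
    # After the 1st failure, disks-1 disks remain; exactly 1 of
    # them is the failed disk's mirror partner.
    return (disks - 2) / (disks - 1)

for pairs in (2, 4, 8):
    print(pairs, round(raid10_second_failure_survival(pairs), 3))
```

Even a 4-disk (2-pair) array survives a random 2nd failure 2
times out of 3, and bigger arrays do better still, whereas any
2nd failure during a RAID-5 rebuild loses the array.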

> If anything happened, the hot spare always kicked in, the
> rebuild went fine, and everyone was happy. You throw out the
> bad disk and pop in a new one.

As I put it best in my engineering statistics class long ago,
"Sigma has a way of catching up with you over time."

And damn if I didn't predict several risk scenarios that were
ignored at a few clients.  ;->

I pay the extra 10-20% for a Seagate NL35 or Western Digital
Caviar RE for 24x7 systems that have RAID.


-- 
Bryan J. Smith                | Sent from Yahoo Mail
mailto:b.j.smith at ieee.org     |  (please excuse any
http://thebs413.blogspot.com/ |   missing headers)




More information about the amd64-list mailing list