[K12OSN] Need Help

Les Mikesell les at futuresource.com
Tue Jul 10 19:30:43 UTC 2007


Terrell Prudé Jr. wrote:
>> Write access is considerably slower on RAID5 and it tends to lock your
>> heads together even for reads.  I've always liked RAID1 for the simple
>> reason that if everything is broken except one disk you can still
>> recover the data it held.  Plus if you do it in software you don't
>> have to worry about having to match the controller to read on a
>> different machine.
> However, RAID 1, by definition, is not scalable beyond two disks.

That's actually not true - Linux md will happily mirror across more 
than two disks if you want.  I keep my backuppc archive on a 3-member 
set where one member is an external firewire drive that is periodically 
added, then removed again after the sync completes.  The rest of the 
time the array is equally happy with two working members.
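
For anyone who wants to try it, here is a rough sketch of the mdadm 
commands (device and array names are just examples; adjust for your 
own disks):

   # build a 3-way mirror from three partitions
   mdadm --create /dev/md0 --level=1 --raid-devices=3 \
       /dev/sda1 /dev/sdb1 /dev/sdc1

   # drop the firewire member after the resync finishes...
   mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1

   # ...and put it back later; md brings it up to date again
   mdadm /dev/md0 --add /dev/sdc1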

> So
> you're in a very similar situation (losing one disk and still going)
> here as you are with RAID 5.

The difference is that RAID1 will still run at full speed with one 
drive, whereas RAID5 speed drops drastically because it has to 
reconstruct everything from parity.  But my first point was that with 
software RAID, even if everything in the machine melts except one 
drive, you can connect that drive to just about any controller and get 
the files it contained.
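
Roughly, with made-up device names, that recovery looks like this on 
whatever box you plug the surviving drive into:

   # assemble and start the array from the one remaining member
   mdadm --assemble --run /dev/md0 /dev/sdb1
   mount /dev/md0 /mnt/rescue

   # with the old-style (version 0.90) md superblock, which lives at
   # the end of the partition, a RAID1 member can even be mounted
   # read-only by itself in a pinch
   mount -o ro /dev/sdb1 /mnt/rescue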


> And as for RAID 5 locking the heads
> together even for reads, that may well depend on your specific RAID
> card.  We haven't seen any evidence of that with our systems at work. 
> However, it might well be true in some implementations, maybe to include
> software RAID.

If you look at how RAID5 stores the data, I don't see how it can avoid 
making multiple heads seek to get it back.  On RAID1, reads can be 
completely independent.
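
To illustrate with a 4-disk array and 64k chunks (numbers are just for 
illustration; the exact parity rotation depends on the layout, but the 
picture is similar):

   disk1   disk2   disk3   disk4
   D0      D1      D2      P
   D4      D5      P       D3
   D8      P       D6      D7

A read covering D0-D2 has to wait on three different spindles, where 
on RAID1 the same read comes off a single disk and the other head 
stays free to service a different request.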

>>> That said, RAID 5 kicks RAID 1 in the delicate parts when it
>>> comes to performance.  Again, we're back to, say, six or eight spindles
>>> vs. two spindles; no contest.
>> That's not necessarily true.  If you configured those 8 drives in
>> RAID1 pairs, you'd have 4 independently seeking places that could be
>> writing at once and all 8 would be independent for reads.  The trick
>> is to arrange your data across the partitions so they are likely to be
>> used simultaneously.  These days you could just combine the RAID1 sets
>> into one LVM, though.
>>
> You're no longer talking about RAID 1, though.  You're talking about
> RAID 10.

No, I'm talking about multiple partitions, mounted to distribute the 
load, like having /, /var, and /home as separate mirrored partitions, 
each filling its own pair of drives.  Or, in the LVM case, combined in 
a more flexible way than RAID10.
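
In the LVM case that looks roughly like this, with hypothetical md 
device and volume group names:

   # two existing RAID1 sets become physical volumes in one group
   pvcreate /dev/md0 /dev/md1
   vgcreate vg0 /dev/md0 /dev/md1

   # carve out logical volumes without caring which mirror they land on
   lvcreate -L 20G -n var vg0
   lvcreate -L 100G -n home vg0
   mkfs.ext3 /dev/vg0/home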

>>> I've run many 14-disk SCSI RAID 5 setups,
>>> and my God, they were quick!!  Yes, I'm assuming a real hardware RAID
>>> card here; I generally don't recommend software RAID, no matter which
>>> RAID level you use.
>> Software RAID1 works very nicely and does not add much overhead on
>> SCSI where there is not much CPU interaction anyway.  I probably
>> wouldn't do RAID5 in software.
>>
> Largely true with SCSI; I wasn't specific enough in that second
> sentence.  Oh, how I wish all disk drive interfaces were SCSI--that
> would solve plenty of problems!  But even with SCSI, we're back to the
> scalability issue; RAID 1 cannot, by definition, scale beyond two
> spindles, so if you want larger capacity, you've either got to buy two
> *huge* drives or go with RAID 5. 

Or use multiple mount points, or use LVM.  With LVM you lose the ability 
to recover files from a single drive but don't have to worry so much 
about balancing the space usage yourself.
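
For example, if /home later runs short, you can hand it unused space 
from the volume group instead of repartitioning (sizes and names are 
hypothetical; unmount first if your kernel can't grow ext3 online):

   lvextend -L +50G /dev/vg0/home
   resize2fs /dev/vg0/home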

-- 
   Les Mikesell
    lesmikesell at gmail.com



