dm-raid vs md-raid

Phillip Susi psusi at cfl.rr.com
Mon Sep 25 19:25:42 UTC 2006


md-raid is the older, pure-software RAID kernel driver.  dm-raid is a 
utility that scans for the configuration tables written to disks by 
hardware fakeraid controllers, and configures the device-mapper kernel 
driver to access the raid volumes.  device-mapper is the newer kernel 
driver that LVM uses to support raid functions.  Most of the modern 
cheap IDE or SATA "hardware raid" cards are not really hardware raid: 
they do the raid work in software, in their proprietary drivers and the 
system BIOS.
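
To make the device-mapper relationship concrete: what dmraid ultimately
does is hand the kernel a mapping table built from the fakeraid metadata.
A sketch of what that might look like for a two-disk mirror (the device
names, sector counts, and set name here are made up for illustration, not
taken from any real controller):

    # dmsetup create fakeraid_mirror --table \
    #   "0 156250000 mirror core 2 1024 nosync 2 /dev/sda 0 /dev/sdb 0"

The table line reads: starting at sector 0, a mapping 156250000 sectors
long, using the "mirror" target with an in-core dirty log (region size
1024 sectors), across two devices starting at offset 0 on each.  LVM
drives the same device-mapper targets, which is why the two share raid
functionality.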

From a practical standpoint, the advantages of dm-raid with a hardware 
fakeraid card over traditional md-raid are:

1) Can boot directly from a raid5 or raid0 volume
2) Can fail over and still boot from a raid5 or raid1 volume with a 
damaged boot area
3) Can dual boot with Windows

dm-raid, however, is not yet as well supported as md-raid.
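
In practice dm-raid is driven from the dmraid command-line tool.  A
typical session on a fakeraid system might look like the following (the
output shown is illustrative of an nvidia fakeraid controller, not copied
from a real machine; device names and the raid-set name will differ):

    # dmraid -r        <- list raid metadata found on each disk
    /dev/sda: nvidia, "nvidia_fdaacfde", mirror, ok, 156301486 sectors
    /dev/sdb: nvidia, "nvidia_fdaacfde", mirror, ok, 156301486 sectors
    # dmraid -ay       <- activate all discovered raid sets
    # ls /dev/mapper/
    control  nvidia_fdaacfde

After activation, the raid set appears as a device-mapper node under 
/dev/mapper/ and can be partitioned and mounted like any other block 
device.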

Anthony Wright wrote:
> I apologise for being thick, but I'm trying to understand the difference 
> between dm-raid and md-raid. Do these projects overlap, or do they 
> address separate problems in the same area?
> 
> Some of the information I've looked at suggests they implement similar 
> RAID functionality in different ways, while others seem to suggest that 
> dm-raid supports RAID functionality provided by device manufacturers 
> while md-raid implements the RAID functionality internally. I'm getting 
> really confused, and can't find anything that explains the difference. 
> From what I've read about dm-raid the concept of maintaining a log of 
> changes and being able to catch up the changes rather than having to do 
> a full disk rebuild sounds very attractive, and I've also seen it 
> mentioned in relation to Clustered LVM which again is attractive.
> 
> The architecture I'm trying to build looks something like (improvements 
> gladly accepted):
> 
> Clustered File System (GFS, OCFS2)
>               |
> Clustered LVM
>               |
> Clustered RAID (dm-raid ?)
>               |
> Networked Disk (GNBD, NBD, AoE)
> 
> The aim being to build a fault tolerant, clustered file system where I 
> can add a machine to increase file system performance, I can add disks 
> to increase storage space, and the whole system can continue to operate 
> if a disk or machine fails (assuming you've correctly configured your 
> RAID).
> 
> Thanks,
> 
> Tony Wright



