[linux-lvm] Bypassing LVM Restrictions - RAID6 With Less Than 5 Disks
atlief at icloud.com
Mon May 2 19:22:30 UTC 2022
Thanks for the response.
John> Alex> I have 4 disks that I’d really like to put into a RAID6. I know about RAID10, but it wouldn’t work well for me for several reasons.
John> Can you explain those reasons? In general, RAID10 gives you only 50% capacity, but much better read/write performance than RAID5/6.
John> But if you want to be able to handle the failure of any two disks in your RAID6, then I can understand your decision.
1. Resilience to any two disk failures/inconsistencies
2. Ability to safely and easily add single disks in the future† (Having 5 disks with my desired resiliency would require RAID6)
3. Ability to safely and easily switch between 1-disk-resiliency and 2-disk-resiliency in the future† (Going from a clean raid6_ls_6 to a clean raid5_ls is extremely easy by comparison)
†All conversions need to be online. With RAID10 I’d need to go from RAID10 -> RAID0 -> RAID5 (-> RAID6). This process is both lengthy and vulnerable to any single disk failure during part of the conversion.
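That conversion chain could be sketched roughly as follows (hypothetical VG/LV names; the exact takeover paths and interim segment types depend on the lvm2 version, and each step is a full reshape that must complete before the next one starts):

```shell
# Sketch only -- every intermediate step below leaves the LV with
# reduced or zero redundancy until the reshape finishes.
lvconvert --type raid0 vg0/data   # RAID10 -> RAID0: no resiliency at all
lvconvert --type raid5 vg0/data   # RAID0  -> RAID5: back to 1-disk resiliency
lvconvert --type raid6 vg0/data   # RAID5  -> RAID6: blocked by lvm at <5 disks
```

A single disk failure during the middle of that chain loses the array, which is the vulnerability described above.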
John> Alex> Buying another disk would also be a waste of money because I don’t need 3-disks-worth of usable capacity.
John> That's fair.
John> Alex> I know there was a question regarding this a few years ago, and the consensus was to not natively support that configuration. I can respect that (although I would urge you to reconsider), but I’d still like to do it on my machine.
John> I would instead build your RAID6 using MD, and then layer LVM on top of it. It works, it's solid and it runs really well.
John> Alex> So far I’ve tried removing the restrictions from the source code and recompiling (I don’t know C, but I’m familiar with general code syntax). I’ve slowly gotten further in the lvconvert process, but there seem to be many similar checks throughout the code.
John> If you don't know the code, then you're not going to get a working RAID6 up and running any time soon.
John> Alex> I’m hoping someone could point me in the right direction towards achieving this goal. If I successfully bypass the user-space tool restrictions, will the rest of LVM likely work with my desired config? Would you be willing to allow the --force option to bypass the restrictions that are not strictly necessary, even at the expense of expected stability? Is there anything else you could suggest?
John> I really can only suggest you setup RAID6 using the MD raid tools (mdadm) and then create your LVM PVs, VGs and LVs on top of that. It really works well.
John> Yes, you now need another tool to manage another layer, but since the MD system is well tested, reliable and just works, I would go with it as the base.
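The mdadm-plus-LVM layering John suggests could look something like this (a sketch; device names, the array name and the VG/LV names are examples, not anything from this thread):

```shell
# Build a 4-disk RAID6 with mdadm (example partitions).
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Layer LVM on top of the MD array.
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -n data -l 100%FREE vg0
```

mdadm has no minimum-disk restriction for a 4-disk RAID6, which is why this layering sidesteps the lvconvert checks entirely.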
I actually used to run LVM on top of mdadm. I switched to pure LVM for simplicity and for per-disk, host-managed integrity checking. I don’t know whether mdadm has since gained the ability to correct single-disk inconsistencies, but without per-disk integrity checking it would be technically impossible to tell which disk holds the bad data once one disk has already failed.
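The per-disk integrity checking mentioned above is LVM's dm-integrity layering, which recent lvm2 exposes via `--raidintegrity` (a sketch; the VG/LV names are examples, and the RAID1 type is just for illustration):

```shell
# Create a raid LV with a dm-integrity layer under each raid image,
# so a mismatched leg can be identified and corrected individually.
lvcreate --type raid1 --raidintegrity y -L 100G -n data vg0

# ...or add integrity to an existing raid LV:
lvconvert --raidintegrity y vg0/data
```

With that layer, each leg can self-report checksum failures, so the raid layer knows which copy is wrong instead of only knowing that the copies disagree.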