[linux-lvm] Bypassing LVM Restrictions - RAID6 With Less Than 5 Disks

Alex Lieflander atlief at icloud.com
Sat May 7 23:14:50 UTC 2022

> On May 7, 2022, at 4:41 PM, Stuart D Gathman wrote:
>> On Fri, 6 May 2022, Alex Lieflander wrote:
>> Thanks. I really don’t want to give up the DM-Integrity management. Less complexity is just a bonus.
> What are you trying to get out of RAID6?  If redundancy and integrity
> are already managed at another layer, then just use RAID0 for striping.
> I like to use RAID10 for mirror + striping, but I understand parity disks give redundancy without halving capacity.  Parity means RMW cycles of
> largish blocks, whereas straight mirroring (RAID1, RAID10) can write
> single sectors without a RMW cycle.

I don’t trust the hardware I’m running on very much, but it’s all I have to work with at the moment. It’s important that the array be resilient to *any* single-chunk corruption (and to several of them at once), because such corruptions are likely to keep happening.

For the last several months I’ve periodically been seeing (DM-Integrity) checksum mismatch warnings at various locations on all of my disks. I stopped using a few SATA ports that were explicitly throwing SATA errors, but I suspect that the remaining connections are unpredictably (albeit infrequently) corrupting data in ways that are more difficult to detect.
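For anyone following along, the kind of evidence described above can be quantified. This is only a sketch with hypothetical names (`my_vg/my_lv`); the `integritymismatches` reporting field assumes a reasonably recent lvm2 with raid+integrity support:

```shell
# Per-LV mismatch counters for a raid LV with integrity sub-LVs
# (hypothetical VG/LV names — substitute your own):
lvs -o+integritymismatches my_vg/my_lv

# The kernel also logs each detected mismatch; a rough per-boot count:
dmesg | grep -ci 'integrity'
```

Watching which physical volumes accumulate mismatches over time is one way to distinguish a bad cable or port from a failing disk.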

I’ve tried to “check” and “repair” my array on multiple kernel versions and live recovery USB sticks, but the “check” always seems to freeze, and all subsequent IO to the array hangs until reboot. At the moment a chunk is only ever made consistent when its data is overwritten, so the data needs to survive periodic, random corruption for as long as possible.
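For reference, the scrub operations I’m referring to are the standard LVM ones; a minimal sketch, again with hypothetical names:

```shell
# Read-only consistency check of a raid LV (this is the step that hangs for me):
lvchange --syncaction check my_vg/my_lv

# Rewrite inconsistent chunks from redundancy instead of just counting them:
lvchange --syncaction repair my_vg/my_lv

# Watch progress and results:
lvs -o name,sync_percent,raid_sync_action,raid_mismatch_count my_vg
```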

I also have a disk that infrequently fails to read from a particular area, but the rest of the disk is fine. I wouldn’t trust that disk with valuable data, but it seems like a perfect candidate to hold additional parity (raid6_ls_6) that I hopefully never need.
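To tie this back to the subject line: since lvcreate refuses a direct raid6 with fewer than 5 devices, the route I have in mind is the takeover path described in lvmraid(7), where a raid5 LV is converted to a raid6 layout with a dedicated Q parity on the last image, so the extra parity can be placed on the questionable disk. A sketch, assuming hypothetical names and that the conversion is supported by your lvm2 version:

```shell
# 3-device raid5 (2 data stripes + rotating parity):
lvcreate --type raid5 -i 2 -L 100G -n my_lv my_vg
# ...wait for the initial sync to finish, then add a dedicated Q parity
# device, yielding a 4-device raid6 (Q is fixed on the last image):
lvconvert --type raid6_ls_6 my_vg/my_lv
```

Whether that counts as “bypassing” the restriction or just using a documented takeover is debatable, but the resulting layout tolerates two failures on 4 disks.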
