<div dir="ltr"><div dir="ltr">On Tue, Feb 14, 2023 at 8:48 AM Kyle Sanderson <<a href="mailto:kyle.leet@gmail.com">kyle.leet@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">> On Mon, Feb 13, 2023 at 11:40 AM John Stoffel <<a href="mailto:john@stoffel.org" target="_blank">john@stoffel.org</a>> wrote:<br>
><br>
> >>>>> "Kyle" == Kyle Sanderson <<a href="mailto:kyle.leet@gmail.com" target="_blank">kyle.leet@gmail.com</a>> writes:<br>
><br>
> > hi DM and Linux-RAID,<br>
> > There have been multiple proprietary solutions (some nearly 20 years<br>
> > old now) with a number of (userspace) bugs that are becoming untenable<br>
> > for me as an end user. Basically how they work is a closed MD module<br>
> > (typically administered through DM) that uses RAID4 for a dedicated<br>
> > parity disk across multiple other disks.<br>
><br>
> You need to explain what you want in *much* better detail. Give simple<br>
> concrete examples. From the sound of it, you want RAID6 but with<br>
> RAID4 dedicated Parity so you can spin down some of the data disks in<br>
> the array? But if need be, spin up idle disks to recover data if you<br>
> lose an active disk?<br>
<br>
No, just a single dedicated parity disk - there's no striping on any<br>
of the data disks. The result of this is you can lose 8 data disks and<br>
the parity disk from an array of 10, and still access the last<br>
remaining disk because each disk maintains a complete copy of its<br>
own data.</blockquote><div><br></div><div>...which is RAID1 plus a parity disk, which seems superfluous as you already achieve (N-1)<br>resilience against single device failures without the latter.</div><div><br>What would you need such a parity disk for?</div><div><br></div><div>Heinz</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">The way the implementations do this is to still expose each<br>
individual disk (/dev/md*), each formatted (+ encrypted)<br>
independently, and, when one is written to, update the parity information on<br>
the dedicated disk. That way, when you add a new disk that's fully<br>
zeroed to the array (parity disk is 16T, new disk is 4T), parity is<br>
preserved. Any bytes written beyond the 4T boundary simply do not include<br>
that disk in the parity calculation.<br>
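</blockquote><div><br></div><div>A minimal Python sketch of that unequal-size behaviour (block size, contents and the helper name are invented purely for illustration, not taken from any of the products mentioned): blocks past a smaller disk's end simply drop out of the XOR, which is why a fully zeroed new disk leaves the existing parity untouched.</div><div><pre>
BLOCK = 4  # toy block size in bytes

def parity_block(disks, idx):
    """XOR the idx-th block of every disk that is large enough to have one."""
    out = bytearray(BLOCK)
    for disk in disks:
        start = idx * BLOCK
        if start >= len(disk):
            continue  # this disk ends before the parity disk does
        for i, b in enumerate(disk[start:start + BLOCK]):
            out[i] ^= b
    return bytes(out)

# "4T" and "16T" disks scaled down to 8 and 32 bytes for the example.
small, big = bytes(range(8)), bytes(range(100, 132))
parity = [parity_block([small, big], i) for i in range(len(big) // BLOCK)]

# Adding a fully zeroed disk changes nothing: x ^ 0 == x, so parity is preserved.
zeroed = bytes(32)
assert parity == [parity_block([small, big, zeroed], i)
                  for i in range(len(big) // BLOCK)]
</pre></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">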
<br>
> Really hard to understand what exactly you're looking for here.<br>
<br>
This might help <a href="https://www.snapraid.it/compare" rel="noreferrer" target="_blank">https://www.snapraid.it/compare</a>. There are at least<br>
hundreds of thousands of these systems out there (based on public<br>
sales from a single vendor), if not well into the millions.<br>
<br>
Kyle.<br>
<br>
On Mon, Feb 13, 2023 at 11:40 AM John Stoffel <<a href="mailto:john@stoffel.org" target="_blank">john@stoffel.org</a>> wrote:<br>
><br>
> >>>>> "Kyle" == Kyle Sanderson <<a href="mailto:kyle.leet@gmail.com" target="_blank">kyle.leet@gmail.com</a>> writes:<br>
><br>
> > As there is no striping, the maximum size of the protected data is the<br>
> > size of the parity disk (so a set of 4T+8T+12T+16T disks can be protected<br>
> > by a single dedicated 16T disk). When a block is written on any disk,<br>
> > the corresponding parity is read back from the parity disk and updated<br>
> > depending on the existing + new values (so only the disk being written<br>
> > and the parity disk are spun up). Additionally, if enough disks are<br>
> > already spun up, the parity information can be recalculated from all of<br>
> > the spinning disks, resulting in a single write to the parity disk<br>
> > (without a read on the parity, doubling throughput). Finally, any of the<br>
> > data disks can be moved around within the array without impacting<br>
> > parity, as the layout has not changed. I don't necessarily need all of<br>
> > these features; the important one is the ability to remove a disk and<br>
> > still access the data that was on it by spinning up every other disk<br>
> > until the rebuild is complete.<br>
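</blockquote><div><br></div><div>A rough sketch of the two parity-update paths described above (all names here are invented for illustration and are not taken from MD/DM or any of the products mentioned): read-modify-write spins up only the written disk plus the parity disk, while a full recompute from already-spinning data disks needs no parity read at all.</div><div><pre>
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def rmw_update(old_data, new_data, old_parity):
    """Read-modify-write: only the written disk and the parity disk spin up."""
    # new_parity = old_parity XOR old_data XOR new_data
    return xor(old_parity, xor(old_data, new_data))

def full_recompute(data_blocks):
    """With all data disks spinning, recompute parity outright and issue a
    single write to the parity disk (no parity read needed)."""
    parity = bytes(len(data_blocks[0]))
    for block in data_blocks:
        parity = xor(parity, block)
    return parity

# Both paths must agree on the resulting parity for the same write.
d = [bytes([1, 2, 3, 4]), bytes([5, 6, 7, 8]), bytes([9, 10, 11, 12])]
parity = full_recompute(d)
new_block = bytes([42, 42, 42, 42])
assert rmw_update(d[1], new_block, parity) == full_recompute([d[0], new_block, d[2]])
</pre></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">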
><br>
> > The benefit of this is that the data disks can all be zoned, and you can<br>
> > have a fast parity disk and still maintain excellent performance in<br>
> > the array (limited only by the speed of the disk in question + the<br>
> > parity disk). Additionally, should 2 disks fail, you've either lost the<br>
> > parity disk and one data disk, or 2 data disks, with the parity and the<br>
> > other disks still intact.<br>
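</blockquote><div><br></div><div>For those failure cases, a single failed data disk (with parity intact) is the one this scheme can rebuild from the parity disk and the surviving disks; a toy sketch of that rebuild (again with invented names, only to show the XOR relationship and why every other disk has to spin up for it):</div><div><pre>
def xor_blocks(blocks, size):
    out = bytearray(size)
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

disks = {"a": bytes([1, 2, 3, 4]), "b": bytes([5, 6, 7, 8]), "c": bytes([9, 0, 9, 0])}
parity = xor_blocks(disks.values(), 4)

lost = disks.pop("b")                                    # disk "b" fails
rebuilt = xor_blocks(list(disks.values()) + [parity], 4)
assert rebuilt == lost                                   # survivors + parity give it back
</pre></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">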
><br>
> > I was reading through the DM and MD code and it looks like everything<br>
> > may already be there to do this; it just needs (significant) stubs to be<br>
> > added to support this mode (or new code). Snapraid is a friendly (and<br>
> > respectable) implementation of this. Unraid and Synology SHR compete<br>
> > in this space, as do other NAS and enterprise SAN providers.<br>
><br>
> > Kyle.<br>
<br>
--<br>
dm-devel mailing list<br>
<a href="mailto:dm-devel@redhat.com" target="_blank">dm-devel@redhat.com</a><br>
<a href="https://listman.redhat.com/mailman/listinfo/dm-devel" rel="noreferrer" target="_blank">https://listman.redhat.com/mailman/listinfo/dm-devel</a><br>
<br>
</blockquote></div></div>