[dm-devel] RAID4 with no striping mode request

Heinz Mauelshagen heinzm at redhat.com
Tue Feb 14 21:27:12 UTC 2023


On Tue, Feb 14, 2023 at 8:48 AM Kyle Sanderson <kyle.leet at gmail.com> wrote:

> > On Mon, Feb 13, 2023 at 11:40 AM John Stoffel <john at stoffel.org> wrote:
> >
> > >>>>> "Kyle" == Kyle Sanderson <kyle.leet at gmail.com> writes:
> >
> > > hi DM and Linux-RAID,
> > > There have been multiple proprietary solutions (some nearly 20 years
> > > old now) with a number of (userspace) bugs that are becoming untenable
> > > for me as an end user. Basically, they work as a closed MD module
> > > (typically administered through DM) that uses RAID4 to maintain a
> > > dedicated parity disk across multiple other disks.
> >
> > You need to explain what you want in *much* better detail.  Give simple
> > concrete examples.  From the sound of it, you want RAID6 but with
> > RAID4 dedicated Parity so you can spin down some of the data disks in
> > the array?  But if need be, spin up idle disks to recover data if you
> > lose an active disk?
>
> No, just a single dedicated parity disk - there's no striping on any
> of the data disks. The result is that you can lose 8 data disks and
> the parity disk from an array of 10, and still access the last
> remaining disk, because each disk maintains a complete copy of its
> own data.
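For illustration, here is a minimal sketch of the layout being described:
blockwise XOR parity across independent, non-striped data disks. The block
size, disk count, and function names below are illustrative assumptions,
not md/dm code.

    /* Each data disk stores its blocks whole (no striping), so a
     * surviving disk is readable on its own.  The parity disk holds,
     * at each offset, the XOR of that offset across all data disks,
     * so any single lost member can be rebuilt from the parity disk
     * plus the survivors. */
    #include <stdint.h>
    #include <stddef.h>

    #define NDATA 4        /* number of data disks (illustrative) */
    #define BLKSZ 4096     /* block size in bytes (illustrative)  */

    /* parity block = XOR of the same block offset on every data disk */
    void compute_parity(uint8_t parity[BLKSZ], uint8_t data[NDATA][BLKSZ])
    {
        for (size_t i = 0; i < BLKSZ; i++) {
            uint8_t p = 0;
            for (int d = 0; d < NDATA; d++)
                p ^= data[d][i];
            parity[i] = p;
        }
    }

    /* rebuild one lost disk's block as parity XOR all surviving disks */
    void rebuild_lost(uint8_t out[BLKSZ], int lost,
                      const uint8_t parity[BLKSZ],
                      uint8_t data[NDATA][BLKSZ])
    {
        for (size_t i = 0; i < BLKSZ; i++) {
            uint8_t v = parity[i];
            for (int d = 0; d < NDATA; d++)
                if (d != lost)
                    v ^= data[d][i];
            out[i] = v;
        }
    }

Because each data disk stores its blocks whole, reading an intact disk never
touches the parity disk or the other members, which is why a single survivor
stays accessible even after every other disk is gone.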


...which is RAID1 plus a parity disk, which seems superfluous, as you already
achieve (N-1) resilience against single device failures without the latter.

What would you need such a parity disk for?

Heinz

> The way the implementations do this is to still expose each
> individual disk (/dev/md*), each formatted (and encrypted)
> independently, and, when a disk is written to, update the parity
> information on the dedicated parity disk. That way, when you add a
> new disk that's fully zeroed to the array (parity disk is 16T, new
> disk is 4T), parity is preserved. For any bytes beyond the 4T
> boundary, that disk is simply excluded from the parity calculation.
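A sketch of the mixed-size behaviour described above, under the assumption
that a data disk simply contributes zero beyond its own length; the struct
and names are illustrative, not md/dm code.

    #include <stdint.h>
    #include <stddef.h>

    struct data_disk {
        const uint8_t *buf;   /* disk contents                         */
        size_t         len;   /* disk size; the parity disk must be at */
                              /* least as large as the largest member  */
    };

    /* Parity byte at offset 'off' = XOR of every disk that extends
     * past 'off'.  Shorter disks drop out of the calculation, which is
     * the same as contributing zero, so a brand new all-zero disk
     * leaves the existing parity unchanged. */
    uint8_t parity_byte(const struct data_disk *disks, int ndisks,
                        size_t off)
    {
        uint8_t p = 0;
        for (int d = 0; d < ndisks; d++)
            if (off < disks[d].len)
                p ^= disks[d].buf[off];
        return p;
    }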
>
> > Really hard to understand what exactly you're looking for here.
>
> This might help https://www.snapraid.it/compare . There's at least
> hundreds of thousands of these systems out there (based on public
> sales from a single vendor), if not well into the millions.
>
> Kyle.
>
> On Mon, Feb 13, 2023 at 11:40 AM John Stoffel <john at stoffel.org> wrote:
> >
> > >>>>> "Kyle" == Kyle Sanderson <kyle.leet at gmail.com> writes:
> >
> > > hi DM and Linux-RAID,
> > > There have been multiple proprietary solutions (some nearly 20 years
> > > old now) with a number of (userspace) bugs that are becoming untenable
> > > for me as an end user. Basically, they work as a closed MD module
> > > (typically administered through DM) that uses RAID4 to maintain a
> > > dedicated parity disk across multiple other disks.
> >
> > You need to explain what you want in *much* better detail.  Give simple
> > concrete examples.  From the sound of it, you want RAID6 but with
> > RAID4 dedicated Parity so you can spin down some of the data disks in
> > the array?  But if need be, spin up idle disks to recover data if you
> > lose an active disk?
> >
> > Really hard to understand what exactly you're looking for here.
> >
> >
> > > As there is no striping, the maximum size of any individual data disk
> > > is the size of the parity disk (so a set of 4T+8T+12T+16T disks can be
> > > protected by a single dedicated 16T parity disk). When a block is
> > > written on any disk, the corresponding parity block is read from the
> > > parity disk and updated based on the old and new data (so only the
> > > disk being written and the parity disk need to be spun up).
> > > Additionally, if all of the data disks are already spun up, the parity
> > > can be recalculated from the spinning disks, resulting in a single
> > > write to the parity disk with no read of the parity disk, roughly
> > > doubling throughput on that device. Finally, any of the data disks can
> > > be moved around within the array without impacting parity, as the
> > > layout has not changed. I don't necessarily need all of these
> > > features; the important one is the ability to remove a disk and still
> > > access the data that was on it by spinning up every other disk until
> > > the rebuild is complete.
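For reference, a sketch of the two parity-update paths described in this
paragraph, assuming plain XOR parity as in md's RAID4/5; the function names
are illustrative, not the kernel code.

    #include <stdint.h>
    #include <stddef.h>

    /* Read-modify-write: only the disk being written and the parity
     * disk need to spin up.
     * new_parity = old_parity ^ old_data ^ new_data                 */
    void parity_rmw(uint8_t *parity, const uint8_t *old_data,
                    const uint8_t *new_data, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            parity[i] ^= old_data[i] ^ new_data[i];
    }

    /* Full recompute: if every data disk is already spinning, parity
     * is rebuilt from scratch and written once, with no read of the
     * parity disk itself.                                            */
    void parity_recompute(uint8_t *parity, const uint8_t *const *data,
                          int ndisks, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            uint8_t p = 0;
            for (int d = 0; d < ndisks; d++)
                p ^= data[d][i];
            parity[i] = p;
        }
    }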
> >
> > > The benefit of this is that the data disks can all be zoned drives,
> > > and you can have a fast parity disk and still maintain excellent
> > > performance in the array (limited only by the speed of the disk in
> > > question plus the parity disk). Additionally, should 2 disks fail,
> > > you've either lost the parity disk and one data disk, or 2 data disks,
> > > with the parity and the remaining data disks still intact.
> >
> > > I was reading through the DM and MD code, and it looks like everything
> > > may already be there to do this; it just needs (significant) stubs
> > > added to support this mode (or new code). Snapraid is a friendly (and
> > > respectable) implementation of this. Unraid and Synology SHR compete
> > > in this space, as well as other NAS and enterprise SAN providers.
> >
> > > Kyle.
>
> --
> dm-devel mailing list
> dm-devel at redhat.com
> https://listman.redhat.com/mailman/listinfo/dm-devel
>
>

