[vdo-devel] vdo 8.1.x - any documentation?

Andrew Walsh awalsh at redhat.com
Wed Feb 2 01:06:55 UTC 2022


On Tue, Feb 1, 2022 at 4:52 PM Gionatan Danti <g.danti at assyoma.it> wrote:
>
> On 2022-02-01 15:49, Andrew Walsh wrote:
> > Please don't apologize!  I have had this question in my inbox, but it
> > keeps slipping further down the page.  I apologize for that.
>
> Hi Andrew, thank you for the kind reply. I really appreciate it.
>
> > Currently, the LVMVDO implementation is limited to a single logical
> > volume atop the VDO storage.  With that in mind, the same kind of
> > workaround can be achieved, assuming you enable scan_lvs.  It makes
> > for a slightly complicated-looking output from `lsblk`, but it is
> > achievable as it was in the past.  It can become complicated pretty
> > quickly, so it must be deployed very carefully.
> >
> > Here is an example of a process I followed to create this on a test
> > machine.  I would probably never use this exact setup in production
> > for various reasons (such as using a loopback device), but it shows
> > the steps needed.
> > ...
>
> Ok, so basically I need to nest multiple LVM layers.  Quite doable for
> a simple stack, somewhat more difficult (but bearable) for more
> convoluted ones.
>
> > Would you mind sharing what you'd like to use this stack for?  It
> > would be great to know how people are leveraging VDO in the wild.
>
> My main use case (testing only, for now) would be to use a single
> LVMVDO volume as the backend for a thinpool with multiple thin volumes
> and snapshots.  In my previous testbed, the vdo utility provided a
> simple and effective method to create and control the VDO volume.  I
> wondered how to do the same with LVMVDO and had skipped the nested-LVM
> case, but I remember thinpool-over-VDO being fully supported in
> previous RHEL releases.
>
> At the same time, the VDO+thinpool combo seems somewhat discouraged on
> the LVM mailing list, mainly on error-recovery grounds if/when ENOSPC
> hits.
>
> Can I ask what the current recommendations are for VDO and thin LVM?
> Regards.
>
> --
> Danti Gionatan
> Supporto Tecnico
> Assyoma S.r.l. - www.assyoma.it
> email: g.danti at assyoma.it - info at assyoma.it
> GPG public key ID: FF5F32A8
>


Some of the current issues that make us lean toward avoiding a
thin-provisioning layer on top of VDO are the speed at which VDO
handles discards and the risk of running out of physical space.  If
you double-layer the thin provisioning, you're not necessarily getting
a lot of benefit (beyond the fact that you're getting snapshots), but
you are possibly taking on a bunch of risk, because now you have twice
the chance to run out of "physical" (to that layer) space.  A VDO
layer underneath LVM thin makes for a very precarious (I believe
potentially unrecoverable) situation if it runs out of physical space.
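
For reference, here's a minimal sketch of the kind of stack we're
talking about.  All of the device, VG, and LV names below are
hypothetical, and it assumes you have set scan_lvs = 1 in
/etc/lvm/lvm.conf so that LVM will scan LVs for PV signatures:

  # Outer VG holding the VDO pool and a single VDO LV on top of it
  vgcreate vg_outer /dev/sdb
  lvcreate --type vdo --name vdo0 --size 50G --virtualsize 100G \
      vg_outer/vdopool0

  # Nest a second VG on the VDO LV, then build the thin pool inside it
  pvcreate /dev/vg_outer/vdo0
  vgcreate vg_inner /dev/vg_outer/vdo0
  lvcreate --type thin-pool --name tpool0 --size 90G vg_inner
  lvcreate --type thin --name thin0 --virtualsize 200G \
      --thinpool tpool0 vg_inner

Every layer of that stack can hand out more space than the layer below
it physically has, which is exactly where the risk comes from.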

If you run out of physical space with LVM thin on top, you could
corrupt things pretty seriously, so monitoring space utilization is
extremely important.  VDO running out of physical space works
differently than other thin-provisioned devices in that you can't
overwrite a block in place.  This is because we can't know for sure
whether that block was a duplicate of another or not.  If it was
already a duplicate, then you don't have anywhere to store the newly
written data, so the VDO device will always allocate a new block
before confirming whether it can write it.  This makes a full-physical
condition really difficult to work with if you can't expand the
underlying storage, so you should always make sure you have the
ability to Grow Physical on the VDO volume, even by a little bit
(always in increments of the slab size).
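
With LVMVDO, growing the pool is just an lvextend on the VDO pool LV.
A quick sketch (names hypothetical; this assumes free extents remain
in the VG, and that the default 2 GiB slab size is in effect, so the
growth is consumed in whole slabs):

  # Check how full the VDO pool is
  lvs -o lv_name,lv_size,data_percent vg_outer

  # Add physical space to the pool; VDO consumes it in whole slabs
  lvextend --size +4G vg_outer/vdopool0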

If you are not monitoring the VDO volume closely and do not have a
workable plan to expand it quickly in reaction (before you actually
run out of space), then I would certainly recommend against using any
kind of thin provisioning on top of the VDO volume.
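
As a rough illustration, the monitoring can be as simple as a cron job
that checks the pool's data_percent and shouts before it fills up.
The 80% threshold and the names here are hypothetical, and a real
deployment would want something more robust:

  #!/bin/sh
  # Warn when the VDO pool crosses 80% physical usage
  used=$(lvs --noheadings -o data_percent vg_outer/vdopool0 | tr -d ' ')
  if [ "${used%.*}" -ge 80 ]; then
      echo "vdopool0 is ${used}% full; grow it now" | wall
  fi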

Thanks,
Andy



