[linux-lvm] Reserve space for specific thin logical volumes
list at xenhideout.nl
Wed Sep 13 18:43:03 UTC 2017
Brassow Jonathan wrote on 13-09-2017 1:25:
> I’m the manager of the LVM/DM team here at Red Hat.
Thank you for responding.
> 2) allow users to reserve some space for critical volumes when a
> threshold is reached
> #2 doesn’t seem crazy hard to implement - even in script form.
> - Add a “critical” tag to all thinLVs that are very important:
> # lvchange --addtag critical vg/thinLV
> - Create script that is called by thin_command, it should:
> - check if a threshold is reached (i.e. your reserved space) and if so,
> - report all lvs associated with the thin-pool that are NOT critical:
> # lvs -o name --noheadings --select 'lv_tags!=critical &&
> pool_lv=thin-pool’ vg
> - run <command> on those non-critical volumes, where <command> could
> # fsfreeze <mnt_point>
I think the above is exactly (or almost exactly) in agreement with the
general idea yes.
It uses a filesystem tool to achieve it instead of allocation blocking (so at the filesystem level, not the DM level).
But if it does the same thing, that matters more than how it is implemented.
The issue with scripts is that they feel rather vulnerable: they can get corrupted, go missing, and so on.
So in that case I suppose that you would want some default, shipped
scripts that come with LVM as example for default behaviour and that are
also activated by default?
So a fixed location in the FHS for those scripts, where the user can find them
and can install new ones.
Something similar to /etc/initramfs-tools/ (on Debian), so maybe
/etc/lvm/scripts/ and /usr/share/lvm/scripts/ or similar.
Also easy to adjust by each distribution if they wanted to.
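A shipped default script of that sort, wired into thin_command, might look like the following minimal sketch. The VG name, pool name, 95% trigger, and the RUN_LVM guard are all my assumptions, not anything prescribed above:

```shell
#!/bin/sh
# Hypothetical thin_command hook: when the thin pool crosses a usage
# trigger, freeze every thin LV in the pool that is NOT tagged
# 'critical'. VG name, pool name and the 95% trigger are assumptions.
VG=vg
POOL=thin-pool
TRIGGER=95

# True (exit 0) if used percentage $1 is at or above $2.
over_threshold() {
    awk -v used="$1" -v max="$2" 'BEGIN { exit !(used >= max) }'
}

# Guarded so the sketch is safe to run outside a real LVM host.
if [ "${RUN_LVM:-0}" = "1" ] && command -v lvs >/dev/null 2>&1; then
    used=$(lvs --noheadings -o data_percent "$VG/$POOL" | tr -d ' ')
    if over_threshold "$used" "$TRIGGER"; then
        lvs -o name --noheadings \
            --select "lv_tags!=critical && pool_lv=$POOL" "$VG" |
        while read -r lv; do
            # Freeze the filesystem of each non-critical thin LV.
            mnt=$(findmnt -n -o TARGET "/dev/$VG/$lv") || continue
            fsfreeze --freeze "$mnt"
        done
    fi
fi
```

A distribution could ship this under the proposed /usr/share/lvm/scripts/ and let the admin override it from /etc/lvm/scripts/.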
If no one uses the critical tag, nothing happens; but if they do use it,
check the unallocated space on the critical volumes and sum it up to arrive
at the required reserve.
Then not even a threshold value needs to be configured.
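That summing could be sketched as below; the field choices and the VG name are my assumptions about how the lvs output would be consumed:

```shell
#!/bin/sh
# Sketch: derive the pool reserve from the 'critical' tag instead of a
# configured threshold, by summing the space the critical thin LVs have
# not yet allocated.

# Read "size_mb data_percent" pairs on stdin; print total unallocated MB.
sum_reserve_mb() {
    awk '{ total += $1 * (100 - $2) / 100 } END { printf "%d\n", total }'
}

# Guarded: only query LVM when explicitly asked to.
if [ "${RUN_LVM:-0}" = "1" ] && command -v lvs >/dev/null 2>&1; then
    lvs --noheadings --units m --nosuffix -o lv_size,data_percent \
        --select 'lv_tags=critical' vg | sum_reserve_mb
fi
```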
> If the above is sufficient, then great. If you’d like to see
> something like this added to the LVM repo, then you can simply reply
> here with ‘yes’ and maybe provide a sentence of what the scenario is
> that it would solve.
Yes. One obvious scenario is root on thin; some reservation mechanism is
pretty much mandatory there.
There is something else though.
You cannot set max size for thin snapshots?
This is part of the problem: you cannot calculate in advance what can
happen. By design, mayhem should not ensue, but what if your predictions
are off?
Being able to set a maximum snapshot size before it gets dropped could
be very nice.
This behaviour is very safe on non-thin.
It is inherently risky on thin.
> (I know there are already some listed in this
> thread, but I’m wondering about those folks that think the script is
> insufficient and believe this should be more standard.)
You really want to be able to set some minimum free space per volume.
Suppose I have three volumes of 10GB, 20GB and 3GB.
I may want the 20GB volume to be least important. The 3GB volume most
important. The 10GB volume in between.
I want at least 100MB free on 3GB volume.
When free space on the thin pool drops below ~120MB, I want the 20GB and
10GB volumes to be frozen: no new extents for those 30GB of volumes.
I want at least 500MB free on 10GB volume.
When free space on the thin pool drops below ~520MB, I want the 20GB
volume to be frozen: no new extents for the 20GB volume.
So I would get 2 thresholds and actions:
- threshold for 3GB volume causing all others to be frozen
- threshold for 10GB volume causing 20GB volume to be frozen
This is easily scriptable and custom thing.
But it would be nice if you could set this threshold in LVM per volume,
so the script can read it out?
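LVM has no per-volume threshold field today, but a tag could carry one. In this sketch the 'reserve:<MB>' tag convention is my own invention, purely to show how a script could read such a value back out:

```shell
#!/bin/sh
# Sketch: encode a per-volume reserve in an LV tag, e.g.
#   lvchange --addtag reserve:500 vg/thinLV
# and let the thin_command script read it back. The tag format is an
# assumed convention, not an LVM feature.

# Extract the numeric reserve (MB) from a comma-separated tag list.
reserve_from_tags() {
    tr ',' '\n' | sed -n 's/^reserve:\([0-9][0-9]*\)$/\1/p'
}

# Guarded: only query LVM when explicitly asked to.
if [ "${RUN_LVM:-0}" = "1" ] && command -v lvs >/dev/null 2>&1; then
    lvs --noheadings -o lv_tags vg/thinLV | tr -d ' ' | reserve_from_tags
fi
```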
100MB of 3GB = 3.3%
500MB of 10GB = 5%
3-5% of mandatory free space could be a good default value.
So the default script could also provide a 'skeleton' for reading the
'critical' tag and then calculating a default % of space that needs to
stay free.
In this case there is a hierarchy:
3GB > 10GB > 20GB.
Any 'critical volume' could cause all others 'beneath it' to be frozen.
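That hierarchy could be expressed as a most-critical-first list; a small helper (names illustrative) prints the volumes 'beneath' the one whose threshold fired, i.e. the candidates to freeze:

```shell
#!/bin/sh
# Sketch: given a most-critical-first list of thin LVs on stdin and the
# name of the LV whose reserve threshold was hit, print every LV that
# ranks beneath it (the ones to freeze, or whose snapshots to drop).
beneath() {
    awk -v hit="$1" '$0 == hit { seen = 1; next } seen { print }'
}

# Example ordering from the text: 3GB most critical, then 10GB, then 20GB.
printf 'lv3g\nlv10g\nlv20g\n' | beneath lv3g   # prints lv10g and lv20g
```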
But the most important thing is to freeze or drop snapshots I think.
And to ensure that this is default behaviour?
Or at least provide skeletons for responding to thin threshold values
being reached so that the burden on the administrator is very minimal.