[linux-lvm] Reserve space for specific thin logical volumes

Xen list at xenhideout.nl
Thu Sep 21 14:49:38 UTC 2017


Instead of responding to each point here individually, I just sought to 
clarify in my other email that by "kernel feature" I did not mean any 
kind of maximum-snapshot constraint mechanism.

At least nothing that would depend on the size of snapshots.


Zdenek Kabelac wrote on 21-09-2017 15:02:

> And you also have projects that do try to integrate shared goals, like
> btrfs.

Without using disjunct components.

So they solve a human problem (coordination) with a technical solution 
(no more component-based design).

> We hope the community will provide some individual scripts...
> Not a big deal to integrate them into the repo dir...

We were trying to identify common cases so that the LVM team can write 
those script skeletons for us.
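
A minimal sketch of what such a skeleton could look like, assuming an 
invented pool name vg0/thinpool and a 95% threshold; the only input it 
needs is the fullness figure that lvs already reports:

    #!/bin/bash
    # Skeleton threshold policy (vg0/thinpool and 95 are example values).
    POOL="vg0/thinpool"
    THRESHOLD=95

    # data_percent is the pool's fullness as reported by lvs.
    USED=$(lvs --noheadings -o data_percent "$POOL" | tr -d ' ')
    if [ "${USED%%.*}" -ge "$THRESHOLD" ]; then
        # Site-specific action goes here, e.g. cutting off
        # non-critical volumes.
        logger "thin pool $POOL is ${USED}% full"
    fi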

> It's mostly about what can be supported 'globally'
> and what is rather 'individual' customization.

There are going to be people interested in a common solution even if 
it's not everyone all at the same time.

> Which can't be delivered with current thin-p technology.
> It's simply too computationally invasive for our targeted performance.

You misunderstood my intent.

> I assume you possibly missed this logic of thin-p:

> So when you 'write' to the ORIGIN, it is your snapshot that becomes
> bigger in terms of individually/exclusively owned chunks - so if you
> have e.g. configured a snapshot to not consume more than XX% of your
> pool, you would simply need to recalculate this with every update of
> shared chunks....

I knew this. But the calculations do not depend on CONSUMED SPACE (or 
its character/distribution), only on FREE SPACE.
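
For illustration, a sketch of the only computation involved, assuming 
an example pool name vg0/thinpool: the pool's free space follows from 
its size and fullness, both of which lvs reports, and no per-volume 
consumption data enters into it:

    # Derive the pool's free space from size and fullness alone
    # (vg0/thinpool is an example name).
    read SIZE USED <<< "$(lvs --noheadings --units b --nosuffix \
                              -o lv_size,data_percent vg0/thinpool)"
    FREE=$(awk -v s="$SIZE" -v u="$USED" \
               'BEGIN { printf "%.0f", s * (100 - u) / 100 }')
    echo "pool has $FREE bytes free"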

> And as has already been said - this is currently unsupportable 'online'

And unnecessary for the idea I was proposing.

Look, I am just trying to get the idea across correctly.

> Another aspect here is - the thin-pool has no idea about the 'history'
> of volume creation - it does not know that volume X is a snapshot of
> volume Y - this is all only 'remembered' by lvm2 metadata - in the
> kernel it's always just - volume X owns a set of chunks 1...
> That's all the kernel needs to know for a single thin volume to work.

I know this.

However, you would need LVM2 to make sure that only origin volumes are 
marked as critical.

> Unsupportable in the 'kernel' without a rewrite, and you can e.g.
> 'work around' this by placing 'error' targets in place of less
> important thinLVs...

I actually think that if I knew how to do multithreading in the kernel, 
I could have the solution in place in a day...

If I were in the position to do any such work to begin with... :(.

But you are correct that the error target is almost the same thing.
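
For what it's worth, that swap is already doable from userspace with 
dmsetup; a sketch, with vg0-scratch as an invented device name:

    # Replace a non-critical thin LV's table with an error target, so
    # any further I/O to it fails immediately (vg0-scratch is invented).
    SECTORS=$(blockdev --getsz /dev/mapper/vg0-scratch)
    dmsetup suspend vg0-scratch
    dmsetup reload vg0-scratch --table "0 $SECTORS error"
    dmsetup resume vg0-scratch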

> Imagine you would get pretty random 'denials' of your WRITE requests
> depending on interactions with other snapshots....

All non-critical volumes would get their write requests denied, 
including snapshots (even read-only ones).

> Surely if you use 'read-only' snapshots you may not see all the
> related problems, but such a very minor subclass of the whole
> provisioning solution is not worth special handling of the whole
> thin-p target.

Read-only snapshots would also die en masse ;-).


> You are not 'reserving' any space as the space already IS assigned to
> those inactive volumes.

Space consumed by inactive volumes is already reflected in the FREE 
EXTENTS figure for the ENTIRE POOL.

We need no other data for the above solution.

> What you would have to implement is to TAKE the space FROM them to
> satisfy writing task to your 'active' volume and respect
> prioritization...

Not necessary. Reserved space is a metric, not a real thing.

By definition, reserved space is a part of unallocated space.

> If you do not implement this 'active' chunk 'stealing' - you are
> really ONLY shifting the 'hit-the-wall' time-frame....  (worth
> possibly only a couple of seconds of your system load)...

Irrelevant. Of course we would employ a measure at 95% full that 
amounts to error targets replacing all non-critical volumes.

Of course, if total mayhem ensues we will still be in trouble.

The idea is that if this total mayhem originates from non-critical 
volumes, the critical ones will be unaffected (apart from their 
snapshots).

You could also flag snapshots of critical volumes as critical and then 
not reserve any space for them, so you would have a combined space 
reservation.

Then snapshots of critical volumes would live longer.

Again, no consumption metric required. Only free space metrics.
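
As a sketch of that: the whole reservation reduces to a single 
comparison against free space, computed as in the earlier snippet (the 
10 GiB figure and the helper cut_off_noncritical are invented for the 
example):

    # The reserve is only a number compared against free space;
    # nothing is pre-allocated. FREE is computed as in the earlier
    # sketch; cut_off_noncritical is a hypothetical helper wrapping
    # the dmsetup error-target swap shown above.
    RESERVE=$((10 * 1024**3))       # keep 10 GiB free for critical LVs
    if [ "$FREE" -lt "$RESERVE" ]; then
        cut_off_noncritical
    fi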

> In other words - tuning 'thresholds' in a userspace 'bash' script will
> give you the very same effect as the very complex 'kernel' solution
> you are focusing on here.

It's just not very complex.

You thought I wanted a space consumption metric for all volumes, 
including snapshots, and then individual attribution of all consumed 
space.

Not necessary.

The only thing I proposed used negative space (free space).



