[linux-lvm] Snapshot behavior on classic LVM vs ThinLVM

Xen list at xenhideout.nl
Sat Apr 15 21:48:32 UTC 2017


Gionatan Danti wrote on 14-04-2017 20:59:

> There is something similar already in place: when pool utilization is
> over 95%, lvmthin *should* try a (lazy) umount. Give a look here:
> https://www.redhat.com/archives/linux-lvm/2016-May/msg00042.html
> 
> Monitoring is a great thing; anyway, a safe fail policy would be *very* 
> nice...

This is the idea I had back then:

- reserve space for calamities.

- when running out of space, start informing the filesystem(s).

- communicate individual unusable blocks, or simply a number of 
unavailable blocks, through some inter-layer communication system (a 
sketch of such a message follows below).
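
Purely to illustrate that third point, here is what such a message might 
look like; the names and fields are invented, since (as said below) no 
such channel exists today:

from dataclasses import dataclass

@dataclass
class PoolSpaceNotice:
    """Hypothetical inter-layer message from a thin pool up to a filesystem."""
    unavailable_blocks: int  # blocks the pool can no longer back with real space
    reserved_blocks: int     # blocks held back "for calamities"

# Example: the pool reports it is 4096 blocks short and keeps 1024 in reserve.
notice = PoolSpaceNotice(unavailable_blocks=4096, reserved_blocks=1024)
print(notice)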

But it was said that such channels do not exist, or that the concept of 
a block device (a logical address space) suddenly having trouble 
delivering its blocks would be a contradiction in terms.

If the concept of a filesystem having to deal with disappearing space 
were made real, what you would get is a hidden block of unusable space 
that starts to grow.

Suppose you have three volumes of sizes X, Y and Z.

Assume that, at the moment, each volume individually is capable of using 
all the space it wants. Now volume X starts to use up more space, and 
the remaining available space is no longer enough for Z.

The space available to all volumes is the same and is constrained only 
by each volume's own virtual size.

So, given that for each volume the available space = min( own free 
filesystem space, free space in the thin pool ), any consumption by one 
volume reduces the available space for all the other volumes by that 
same amount.

For the volume doing the consuming this is to be expected; for the other 
volumes it is strange.

Each consumption turns into a reduction for all the other volumes, 
including the consuming volume itself.
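
A minimal sketch of that relation, with made-up numbers (the pool size 
and per-volume free space figures are assumptions, not taken from any 
real system):

# Illustrative numbers only: a thin pool with 100 GiB left and three thin volumes.
pool_free = 100                           # free space remaining in the pool (GiB)
fs_free = {"X": 80, "Y": 60, "Z": 50}     # free space each filesystem reports (GiB)

def effective_free(volume):
    """Space the volume can actually still allocate: min(own fs free, pool free)."""
    return min(fs_free[volume], pool_free)

print({v: effective_free(v) for v in fs_free})
# {'X': 80, 'Y': 60, 'Z': 50} -- nothing is constrained by the pool yet

# Volume X now writes 70 GiB of new data: the pool shrinks for everyone.
fs_free["X"] -= 70
pool_free -= 70

print({v: effective_free(v) for v in fs_free})
# {'X': 10, 'Y': 30, 'Z': 30} -- Y and Z lost space without writing anything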

This reduction of space is therefore a single number that pertains to 
all volumes, and it only has any effect once the real available space 
drops below the virtual space (filesystem-oriented, but really 
LVM-determined) that a volume thought it had.

For every volume that is affected, there is now a discrepancy between 
virtual available space and real available space.

This differs per volume but is really just a subtraction. LVM should be 
able to know this number, since it is simply about the number of extents 
available versus the number 'needed'.
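
For illustration, that per-volume number really is just a subtraction 
over extent counts (all the figures below are invented):

pool_free_extents = 2000                          # extents left in the pool

# Free extents each thin volume believes it still has (its virtual free space).
virtual_free = {"X": 1500, "Y": 4000, "Z": 6000}

# The discrepancy per volume is a subtraction, clamped at zero for volumes
# whose own virtual free space is already the smaller of the two numbers.
discrepancy = {v: max(0, free - pool_free_extents)
               for v, free in virtual_free.items()}

print(discrepancy)   # {'X': 0, 'Y': 2000, 'Z': 4000}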

Zdenek said that this information is not available in a live fashion 
because the algorithms that find a new free extent need to go look for 
it first.

Regardless, if this information were available, it could be communicated 
to the logical volume, which could pass it on to the filesystem.

There are two ways to do that: the filesystem polling a number through 
some block device command, or a daemon telling the filesystem.

Remounting the filesystem read-only is one example of such a "through a 
daemon" action.
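
As a rough sketch of that "through a daemon" approach (this is not 
dmeventd and not the existing 95% policy; the pool name, mount points 
and threshold below are all assumptions): poll the pool's data usage 
with lvs and remount the affected filesystems read-only when usage 
crosses a limit.

#!/usr/bin/env python3
"""Sketch of a user-space watchdog: poll thin pool usage, remount read-only.
Run as root; POOL, MOUNTPOINTS and THRESHOLD are invented for this example."""

import subprocess
import time

POOL = "vg0/thinpool"                       # hypothetical VG/thin-pool name
MOUNTPOINTS = ["/srv/vol_x", "/srv/vol_y"]  # hypothetical mounts of thin LVs
THRESHOLD = 95.0                            # percent of pool data space used

def pool_data_percent(pool):
    """Ask lvs for the thin pool's data_percent field."""
    out = subprocess.check_output(
        ["lvs", "--noheadings", "-o", "data_percent", pool], text=True)
    return float(out.strip())

def remount_ro(mountpoint):
    """Flip a mounted filesystem to read-only."""
    subprocess.run(["mount", "-o", "remount,ro", mountpoint], check=False)

if __name__ == "__main__":
    while True:
        if pool_data_percent(POOL) >= THRESHOLD:
            for mp in MOUNTPOINTS:
                remount_ro(mp)
        time.sleep(10)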

Zdenek said that dmeventd plugins cannot issue a remount request because 
the system call is too big.

But it would be important for the filesystem to have a feature for 
dealing with unavailable space, for example by being forced to reserve a 
certain amount of space in a live or dynamic fashion.



