[linux-lvm] Reserve space for specific thin logical volumes

Zdenek Kabelac zkabelac at redhat.com
Wed Sep 13 08:33:53 UTC 2017


>>     # fsfreeze <mnt_point>
>>
>> The above should have the result you want - essentially locking out
>> all non-critical file systems.  The admin can easily turn them back on
>> via fsfreeze one-by-one as they resolve the critical lack of space.
>> If you find this too heavy-handed, perhaps try something else for
>> <command> instead first.
> 
> Very good suggestion. Actually, fsfreeze should work without too much drama.
>

Think about this case:

an origin volume with a number of snapshots taken over time.

If you use ONLY 'read-only' snaps - there is not much to worry about - writes 
go only to the origin, so you get a fairly 'precise' estimate of how much data 
is in flight (by watching the amount of dirty pages...).

However, when the other snapshots (e.g. VM images) are also in use and have 
writes in progress, invoking the 'fsfreeze' operation puts an unpredictable 
amount of provisioning in front of you (all the dirty pages need to be 
committed to disk first)...

So you can easily 'freeze' yourself with 'fsfreeze'.
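For reference, fsfreeze from util-linux uses -f to freeze and -u to unfreeze, 
so the admin can thaw filesystems one-by-one as suggested above once space is 
recovered (the mount point here is just a placeholder):

    # fsfreeze -f /mnt/vm1     (freezes; dirty data must be written out first)
    # fsfreeze -u /mnt/vm1     (thaws it again)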

lvm2 has become much smarter over the last year - it now avoids e.g. flushing 
when it is only querying the used 'data-space', with 2 consequences:

a) it prevents a 'dead-lock' when suspending with flushing (while holding the 
lvm2 VG lock - which was a really bad problem, since in that state you could 
not run 'lvextend' on the thin-pool to rescue the situation, even when you 
still had free space in the VG, or could have extended the VG...)

b) it gives you somewhat 'historical/imprecise/async' runtime data about 
thin-pool fullness (see the lvs example below)
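For example, those fullness numbers (already historical by the time you read 
them) can be queried without any flushing via lvs - the VG and pool names here 
are just placeholders:

    # lvs --noheadings -o data_percent,metadata_percent vg/thinpool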

So you can start to see that making a 'perfect' decision based on such 
historical data is not an easy task...
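To make the point concrete, here is a minimal (and deliberately naive) sketch 
of such a decision loop - it polls the stale data_percent value and extends 
the pool past a threshold; the names and numbers are hypothetical:

    POOL=vg/thinpool
    while sleep 10; do
        # this value is already stale the moment we read it
        used=$(lvs --noheadings -o data_percent "$POOL" | tr -d ' ')
        if [ "${used%.*}" -ge 80 ]; then
            # rescue attempt - fails once the VG itself runs out of free space
            lvextend -L+1G "$POOL" || break
        fi
    done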


Regards


Zdenek



