[linux-lvm] Snapshot behavior on classic LVM vs ThinLVM

Xen list at xenhideout.nl
Fri Apr 14 19:20:53 UTC 2017

Gionatan Danti schreef op 14-04-2017 20:59:
> Il 14-04-2017 19:36 Xen ha scritto:
>> The thing is just dismounted apparently; I don't even know what causes 
>> it.
> Maybe running "iotop -a" for some hours can point you in the right 
> direction?
>> The other volumes are thin. I am just very afraid of the thing filling
>> up due to some runaway process or an error on my part.
>> If I have a 30GB volume and a 30GB snapshot of that volume, and if
>> this volume is nearly empty and something starts filling it up, it
>> will do twice the writes to the thin pool. Any damage done is doubled.
>> The only thing that could save you (me) at this point is a process
>> instantly responding to some 90% full message and hoping it'd be in
>> time. Of course I don't have this monitoring in place; everything
>> requires work.
> There is something similar already in place: when pool utilization is
> over 95%, lvmthin *should* try a (lazy) umount. Have a look here:
> https://www.redhat.com/archives/linux-lvm/2016-May/msg00042.html

I even forgot about that. I have such bad memory.

Checking back, the host that I am now on uses LVM 111 (Debian 8). The 
next update is to... 111 ;-).

That was almost a year ago. You were using version 130 back then. I am 
still on 111 on Debian ;-).

Zdenek recommended 142 back then.

I could pull it from Debian testing though: version 168.

> Monitoring is a great thing; anyway, a safe fail policy would be *very* 
> nice...

A lazy umount does not invalidate existing handles, though; a process that 
still has, for example, a directory open on the volume keeps writing through it.

I believe there was an issue with the remount -o ro call? That it took too 
many resources for the daemon?

Anyway, I am very happy that the umount happens, if it happens.
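A safer fail policy than an emergency umount would be autoextension. As I understand it, lvm.conf has knobs for exactly this in the activation section; a fragment like the following (the values are only illustrative) tells dmeventd to grow the pool by 20% each time it crosses 80% usage, as long as the VG has free extents:

```
# /etc/lvm/lvm.conf -- illustrative values, not a recommendation
activation {
    # Autoextend a thin pool when its data usage crosses this percentage.
    # 100 disables autoextension entirely.
    thin_pool_autoextend_threshold = 80

    # Grow the pool by this percentage of its current size on each trigger.
    thin_pool_autoextend_percent = 20
}
```

Of course that only postpones the problem if the VG itself fills up, which is exactly the runaway-process case.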

I just don't feel comfortable about the system at all. I simply don't want 
it to crash :p.
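The 90%-full monitoring I said I don't have in place could be as little as a cron'd shell script along these lines. The pool name vg0/thinpool is made up, and the actual lvs call is stubbed out with a stand-in value so the sketch runs anywhere; in real use you would uncomment it:

```shell
#!/bin/sh
# Hypothetical thin pool watcher; vg0/thinpool is an invented name.
POOL="vg0/thinpool"
THRESHOLD=90

# Real query (commented out so the sketch runs without LVM present):
# USED=$(lvs --noheadings -o data_percent "$POOL" | tr -d ' ')
USED="91.23"   # stand-in value for demonstration

# Compare the integer part of data_percent against the threshold.
if [ "${USED%.*}" -ge "$THRESHOLD" ]; then
    echo "WARNING: thin pool $POOL at ${USED}% data usage"
fi
```

From there it could mail root or start freeing space, which would at least beat finding out at 95% when the lazy umount kicks in.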
