[linux-lvm] Snapshot behavior on classic LVM vs ThinLVM
zkabelac at redhat.com
Sat Apr 22 21:22:03 UTC 2017
On 22.4.2017 at 09:14, Gionatan Danti wrote:
> On 14-04-2017 10:24, Zdenek Kabelac wrote:
>> However, there are many different solutions for different problems -
>> and with the current script execution a user may build his own
>> solution - i.e. call 'dmsetup remove -f' for running thin volumes, so
>> all instances get an 'error' device when the pool goes above some
>> threshold setting (just like the old 'snapshot' invalidation worked).
>> This way the user just kills the tasks using the thin volume, but
>> still keeps the thin-pool usable for easy
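[Editor's note: the force-removal approach described above can be sketched as a small script. This is only a sketch - the VG/pool names, the threshold value and the "$vg-$lv" dm node naming are assumptions (real '-' characters in VG/LV names are doubled in dm node names, which this simple version does not handle):]

```shell
#!/bin/sh
# Sketch: when the thin-pool's data usage crosses THRESHOLD, force-remove
# every thin LV of the pool. Any task still holding a device transparently
# gets an 'error' target, while the thin-pool itself stays usable.
THRESHOLD=95

# Keep only the integer part of an lvs percent value like " 96.52".
pct_int() { tr -d '[:space:]' | cut -d. -f1; }

kill_thin_users() {
    vg=$1; pool=$2
    used=$(lvs --noheadings -o data_percent "$vg/$pool" | pct_int)
    [ "${used:-0}" -ge "$THRESHOLD" ] || return 0
    # Select all thin LVs backed by this pool and force-remove them.
    lvs --noheadings -o lv_name -S "pool_lv=$pool" "$vg" |
    while read -r lv; do
        dmsetup remove -f "$vg-$lv"
    done
}

# Usage (requires root and a real thin pool):  kill_thin_users vg thinpool
```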
> This is a very good idea - I tried it and it indeed works.
> However, it is not very clear to me what the best method is to monitor the
> allocated space and trigger an appropriate user script (I understand that
> version > .169 has %checkpoint scripts, but current RHEL 7.3 is on .166).
> I had the following ideas:
> 1) monitor the syslog for the "WARNING pool is dd.dd% full" message;
> 2) set a higher-than-0 low_water_mark and catch the dmesg/syslog
> "out-of-data" message;
> 3) register with device mapper to be notified.
> Which do you think is the best approach? If trying to register with
> device mapper, how can I accomplish that?
> One more thing: from the device-mapper docs (and indeed as observed in my
> tests), the "pool is dd.dd% full" message is raised one single time: once
> it has been raised, even if the pool is emptied and refilled, no new
> message is generated. The only method I found to make the system
> re-generate the message is to deactivate and reactivate the thin pool
> itself.
ATM there is even a bug in 169 & 170 - dmeventd should generate the message
at 80, 85, 90, 95 and 100% - but it does so only once - it will be fixed soon...
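[Editor's note: until the fixed dmeventd is available, a plain polling loop is one way to approximate the checkpoint behaviour without parsing syslog. A rough sketch - the pool name, 10-second interval, checkpoint list and hook path are all placeholder assumptions:]

```shell
#!/bin/sh
# Polling fallback for lvm2 < .169 (no per-checkpoint dmeventd scripts):
# read the pool's data_percent periodically and call a user hook once per
# crossed checkpoint.
CHECKPOINTS="80 85 90 95 100"

# Echo the highest checkpoint <= the given usage (empty when below all).
highest_checkpoint() {
    hit=""
    for c in $CHECKPOINTS; do
        [ "$1" -ge "$c" ] && hit=$c
    done
    echo "$hit"
}

monitor() {
    vg=$1; pool=$2; hook=$3; last=""
    while sleep 10; do
        used=$(lvs --noheadings -o data_percent "$vg/$pool" \
               | tr -d '[:space:]' | cut -d. -f1)
        cp=$(highest_checkpoint "${used:-0}")
        # Fire the hook only when a new checkpoint is reached.
        if [ -n "$cp" ] && [ "$cp" != "$last" ]; then
            "$hook" "$vg/$pool" "$cp"
            last=$cp
        fi
    done
}

# Usage:  monitor vg thinpool /usr/local/sbin/thin-alert &
```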
>> ~16G, so you can't even extend it, simply because it's
>> unsupported to use any bigger size
> Just out of curiosity: in such a case, how does one proceed to regain access
> to the data?
> And now the most burning question ... ;)
> Given that the thin-pool is monitored and never allowed to fill its
> data/metadata space, how do you rate its overall stability vs classical
> thick LVM?
I have not seen a metadata error for quite a long time...
Since all the metadata updates are CRC32-protected, it's quite solid.