[linux-lvm] [lvmlockd] recovery lvmlockd after kill_vg
damon.devops at gmail.com
Tue Sep 25 10:18:53 UTC 2018
AFAIK, once sanlock can no longer access the lease storage, it sends
"kill_vg" to lvmlockd, and the standard procedure is then to deactivate
the logical volumes and drop the VG's locks.
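To make that procedure concrete, here is a minimal sketch of the standard
kill_vg response as a shell function. The function name and the use of a
VG-name argument are my own; the commands themselves come from vgchange(8)
and lvmlockctl(8) and must be run as root on the affected host:

```shell
# Sketch of the standard response to a sanlock kill_vg event.
# handle_kill_vg is a hypothetical helper name; pass the VG name as $1.
handle_kill_vg() {
    local vg="$1"
    # 1. Deactivate all LVs in the VG so no I/O continues against
    #    leases that sanlock can no longer renew.
    vgchange -an "$vg"
    # 2. Tell lvmlockd to drop the VG's lockspace and locks.
    lvmlockctl --drop "$vg"
}
```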
But sometimes the storage recovers after kill_vg (and before we have
deactivated the volumes or dropped the locks), and LVM commands then
print "storage failed for sanlock leases", like this:
[root at dev1-2 ~]# vgck 71b1110c97bd48aaa25366e2dc11f65f
WARNING: Not using lvmetad because config setting use_lvmetad=0.
WARNING: To avoid corruption, rescan devices to make changes visible to lvmetad.
VG 71b1110c97bd48aaa25366e2dc11f65f lock skipped: storage failed for sanlock leases
Reading VG 71b1110c97bd48aaa25366e2dc11f65f without a lock.
So what should I do to recover from this, preferably without affecting
the volumes that are in use?
I found a way, but it seems very tricky: save the "lvmlockctl -i" output,
run "lvmlockctl -r <vg>", and then reactivate the volumes listed in the
saved output.
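For reference, the tricky workaround above can be sketched as a shell
function. The function name and the temp-file path are my own; the flags
come from lvmlockctl(8) and vgchange(8), and the lock mode used for
reactivation would need to match what the saved "lvmlockctl -i" output
showed:

```shell
# Sketch of the workaround: save state, drop the failed lockspace,
# restart it once storage is back, then reactivate.
# recover_after_kill_vg is a hypothetical helper name; pass the VG as $1.
recover_after_kill_vg() {
    local vg="$1"
    # Save the current lvmlockd state so we know which LVs to reactivate.
    lvmlockctl --info > /tmp/lvmlockd-state.txt
    # Drop the failed lockspace for the VG (same as "lvmlockctl -r").
    lvmlockctl --drop "$vg"
    # Once the storage is reachable again, restart the VG's lockspace...
    vgchange --lock-start "$vg"
    # ...and reactivate the LVs that were active before, using the lock
    # modes recorded in the saved --info output (exclusive shown here).
    vgchange -ay "$vg"
}
```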
Do we have an "official" way to handle this? It is pretty common that by
the time I notice lvmlockd has failed, the storage has already recovered.