[linux-lvm] [lvmlockd] Refresh lvmlockd leases after sanlock changes

Damon Wang damon.devops at gmail.com
Wed Mar 7 05:50:24 UTC 2018


Hi Dave,

Thank you for your reply!

> thin lv's from the same thin pool cannot be used from different hosts
> concurrently.  It's not because of lvm metadata, it's because of the way
> dm-thin manages blocks that are shared between thin lvs.  This block
> sharing/unsharing occurs as each read/write happens on the block device,
> not on LV activation or any lvm command.
>

My plan is to give each vm one thin lv as its root volume, and to give each
thin lv its own thin pool. Is this a way to avoid the problem of blocks being
shared within a thin pool?
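
For reference, this is roughly what I mean, a minimal sketch assuming a shared
VG called "vg_shared" managed by lvmlockd (the VG, pool and LV names are made
up for illustration):

    # create a dedicated thin pool and one thin lv for a single vm
    lvcreate --type thin-pool -L 20G -n pool_vm01 vg_shared
    lvcreate --type thin -V 20G --thinpool pool_vm01 -n root_vm01 vg_shared

    # activate exclusively on the host that runs this vm (lvmlockd/sanlock)
    lvchange -aey vg_shared/root_vm01

So every vm's thin lv would be the only lv in its pool, and no blocks would be
shared with another vm's lv.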

> I suggest trying https://ovirt.org


I did some research on oVirt; there are two designs at the moment
(https://github.com/oVirt/vdsm/blob/master/doc/thin-provisioning.md),
and I found that it relies heavily on the SPM host: once the SPM host fails,
the availability of all vms is affected, which is something we don't want to
see.


> You need to release the lock on the source host after the vm is suspended,
> and acquire the lock on the destination host before the vm is resumed.
> There are hooks in libvirt to do this.  The LV shouldn't be active on both
> hosts at once.
>

I did some experiments on this after reading the libvirt migration hook page
(https://libvirt.org/hooks.html#qemu_migration), and it seems unusable for
this purpose. I wrote a simple script and confirmed that the hooks are
executed in this order (a sketch of such a script follows the list below):

   1. on the dest host: "migrate begin", "prepare begin", "start begin",
      "started begin"
   2. after a while (usually a few seconds), on the source host: "stopped end"
      and "release end"

In short, it does not provide a way to run anything at the moment the vm is
suspended and resumed. 🙁
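
If there were such a hook point, the handoff itself would only need something
like the following, assuming the vm's root lv lives in a shared lvmlockd VG as
in the sketch above (names are illustrative):

    # on the source host, after the vm is suspended:
    lvchange -an vg_shared/root_vm01     # deactivate and release the lock

    # on the destination host, before the vm is resumed:
    lvchange -aey vg_shared/root_vm01    # activate exclusively, acquiring the lock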

Thanks!

Damon

