[linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd

Zhiyong Ye yezhiyong at bytedance.com
Wed Nov 2 09:01:59 UTC 2022

On 11/2/22 1:57 AM, David Teigland wrote:
> On Wed, Nov 02, 2022 at 01:02:27AM +0800, Zhiyong Ye wrote:
>> Hi Dave,
>> Thank you for your reply!
>> Does this mean that there is no way to live migrate VMs when using lvmlockd?
> You could by using linear LVs, ovirt does this using sanlock directly,
> since lvmlockd arrived later.

Yes, a standard (linear) LV is theoretically capable of live migration 
because it allows multiple hosts to use the same LV concurrently under a 
shared lock (lvchange -asy). But I want to support live migration for 
both LV types (thin LV and standard LV).
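To make the distinction concrete, here is a hedged sketch of the shared-activation commands involved (the VG/LV names "vg", "lv0" are examples, and the session assumes a VG already managed by lvmlockd with a sanlock or dlm lock manager):

```shell
# Create a shared VG under lvmlockd, then a linear LV in it
# (names and sizes are illustrative only):
vgcreate --shared vg /dev/sdb
lvcreate -n lv0 -L 100G vg          # linear LV by default

# Host A and host B can each activate it with a shared lock,
# so a VM can run on A while its state is migrated to B:
lvchange -asy vg/lv0

# A thin LV, by contrast, can only be activated exclusively:
lvchange -aey vg/thin0              # exclusive; -asy is not granted for thin
```

The shared lock only serializes LVM metadata operations; as David notes, it does not arbitrate the dm I/O path itself, so the hypervisor (libvirt/qemu) is still responsible for ensuring only one host writes guest data at a time during migration.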

>> As you describe, the granularity of thinlv's sharing/unsharing is per
>> read/write IO, except that lvmlockd enforces this limitation at the lvm
>> activation command level.
>> Is it possible to modify the code of lvmlockd to break this limitation and
>> let libvirt/qemu guarantee the mutual exclusivity of each read/write IO
>> across hosts during live migration?
> lvmlockd locking does not apply to the dm i/o layers.  The kind of
> multi-host locking that you seem to be talking about would need to be
> implemented inside dm-thin to protect on-disk data structures that it
> modifies.  In reality you would need to write a new dm target with locking
> and data structures designed for that kind of sharing.

I can try to write a new dm-thin target, or modify the existing dm-thin 
target, to support this feature if it is technically feasible. But I'm 
curious why the current dm-thin doesn't support multi-host shared 
access the way dm-linear does.
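The asymmetry is visible from the command line. A speculative illustration (VG and LV names are examples; exact error text varies by version): dm-linear is a stateless sector remapping, while every write to a thin LV may allocate blocks and rewrite the pool's shared on-disk metadata, which is why lvmlockd only grants exclusive locks for thin LVs:

```shell
# In a shared VG "vg", create a linear LV, a thin pool, and a thin LV:
lvcreate -n lin0 -L 10G vg
lvcreate --thinpool pool -L 10G vg
lvcreate -n thin0 -V 5G --thinpool vg/pool

lvchange -asy vg/lin0    # shared activation succeeds: linear mapping
                         # modifies no shared on-disk state per IO
lvchange -asy vg/thin0   # fails: a thin LV's writes mutate the pool
                         # metadata btree, which has no cross-host locking
```

So the limitation is not arbitrary in lvmlockd; it reflects that dm-thin's metadata was designed for a single active host, which is what David's suggestion of a new dm target with its own locking and data structures would have to address.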


