[linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd

Stuart D Gathman stuart at gathman.org
Tue Nov 1 18:08:21 UTC 2022

> On Tue, Nov 01, 2022 at 01:36:17PM +0800, Zhiyong Ye wrote:
>> I want to implement live migration of VMs in the lvm + lvmlockd + sanlock
>> environment. There are multiple hosts in the cluster using the same iscsi
>> connection, and the VMs are running on this environment using thinlv
>> volumes. But if we want to live migrate a VM, it is difficult, since a
>> thinlv from the same thin pool can only be exclusively active on one
>> host.

I just expose the LV (thin or not - I prefer not) as an iSCSI target
that the VM boots from.  There is only one host that manages a thin pool,
and that host is a single point of failure, but there are no locking
issues.  You issue the LVM commands on the iSCSI server (which I guess
they call a NAS these days).
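For the record, the export can be done with targetcli on the storage
host; this is only a sketch, and the VG, LV, and IQN names here are
made up:

```shell
# Sketch only - run on the storage host as root.  vg0, vm1-root and
# the IQN are hypothetical names, not from the original post.
lvcreate -L 20G -n vm1-root vg0     # a plain (non-thin) LV for the VM

# Register the LV as a block backstore and publish it as an iSCSI LUN.
targetcli /backstores/block create name=vm1-root dev=/dev/vg0/vm1-root
targetcli /iscsi create iqn.2022-11.org.example:vm1-root
targetcli /iscsi/iqn.2022-11.org.example:vm1-root/tpg1/luns \
    create /backstores/block/vm1-root
targetcli saveconfig
```

Any VM host in the cluster can then log in to that target, so the VM
can be restarted or migrated anywhere without LVM locking coming into
play.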

If you need a way for a VM to request enlarging an LV it accesses, or
similar interaction, I would make a simple API where each VM gets a
token that determines which LVs it has access to and how much total
storage it can consume.  Maybe someone has already done that.
I just issue the commands on the LVM/NAS/iSCSI host.
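The token check such an API would need is small; here is a minimal
sketch in Python.  Everything in it (the token strings, the table
layout, the `request_extend` name) is hypothetical - the point is just
that a token maps to a set of LVs and a byte quota:

```python
# Hypothetical sketch of the per-VM token check described above.
# In a real service ALLOCATED would be queried from LVM (e.g. `lvs`),
# and a passing check would be followed by an lvextend on the host.

# Each token maps to the LVs a VM may touch and a total byte quota.
TOKENS = {
    "vm1-secret": {"lvs": {"vg0/vm1-root", "vg0/vm1-data"},
                   "quota": 100 << 30},   # 100 GiB
}

# Bytes currently allocated per token.
ALLOCATED = {"vm1-secret": 60 << 30}      # 60 GiB in use

def request_extend(token, lv, extra_bytes):
    """Return True if the holder of `token` may grow `lv` by `extra_bytes`."""
    entry = TOKENS.get(token)
    if entry is None or lv not in entry["lvs"]:
        return False  # unknown token, or LV not owned by this VM
    return ALLOCATED[token] + extra_bytes <= entry["quota"]
```

With the numbers above, a 10 GiB extension of `vg0/vm1-root` would be
allowed (70 GiB total) while a 50 GiB one would be refused.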

I haven't done this, but there can be more than one thin pool, each on
its own NAS/iSCSI server.  So if one storage server crashes, only the
VMs attached to it crash.  You can only (simply) migrate a VM to
another VM host on the same storage server.

BUT, you can migrate a VM to another host less instantly using DRBD
or another remote-mirroring driver.  I have done this.  You get the
remote LV mirror mostly synced, suspend the VM (saving its state to a
file if you need to rsync that to the remote), finish the sync of the
LV(s), and resume the VM on the new server - in another city.  Handy
when you have a few hours' notice of a natural disaster
(hurricane/flood).
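The sequence above looks roughly like this; it is a sketch only, the
resource name r0, the VM name vm1, and the paths are invented, and it
assumes DRBD is already configured to mirror the VM's LV to the remote
host:

```shell
# Sketch of the DRBD-based migration described above.  r0, vm1 and
# the file paths are hypothetical; DRBD config for the LV is assumed.
drbdadm up r0                      # start mirroring the LV to the remote
drbdadm status r0                  # wait here until mostly in sync

virsh save vm1 /var/tmp/vm1.state  # suspend the VM, state to a file
rsync /var/tmp/vm1.state remote:/var/tmp/

drbdadm status r0                  # let the last writes replicate
drbdadm secondary r0               # hand the mirror over...
ssh remote drbdadm primary r0      # ...to the remote side

ssh remote virsh restore /var/tmp/vm1.state   # resume on the new host
```

The VM is down only for the tail of the sync plus the state-file
transfer, which is why a few hours of warning is plenty.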

More information about the linux-lvm mailing list