[linux-lvm] How to implement live migration of VMs in thinlv after using lvmlockd
yezhiyong at bytedance.com
Wed Nov 2 09:31:50 UTC 2022
Thank you so much for sharing your usage scenario; I learned a lot
from your experience.
On 11/2/22 2:08 AM, Stuart D Gathman wrote:
>> On Tue, Nov 01, 2022 at 01:36:17PM +0800, Zhiyong Ye wrote:
>>> I want to implement live migration of VMs in an lvm + lvmlockd
>>> environment. Multiple hosts in the cluster share the same storage
>>> connection, and the VMs run in this environment on thinlv volumes.
>>> But live migration of a VM is difficult, since thinlvs from the
>>> same thin pool can only be exclusively activated on one host.
> I just expose the LV (thin or not - I prefer not) as an iSCSI target
> that the VM boots from. There is only one host that manages a thin
> pool, and that is a single point of failure, but no locking issues. You
> issue the LVM commands on the iSCSI server (which I guess they call NAS
> these days).
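For reference, a minimal sketch of exposing an LV as an iSCSI LUN with targetcli (the LIO target). All names here (VG "vg0", LV "vm01", the IQNs) are made up for illustration; adjust for your own setup:

```shell
# Assumed names: VG "vg0", LV "vm01"; the IQNs below are placeholders.
# Create a block backstore on top of the LV, then an iSCSI target and LUN.
targetcli /backstores/block create name=vm01disk dev=/dev/vg0/vm01
targetcli /iscsi create iqn.2022-11.com.example:vm01
targetcli /iscsi/iqn.2022-11.com.example:vm01/tpg1/luns create /backstores/block/vm01disk
# Allow the VM host's initiator to log in (initiator IQN is also hypothetical).
targetcli /iscsi/iqn.2022-11.com.example:vm01/tpg1/acls create iqn.2022-11.com.example:vmhost1
```

The VM host then logs in with its initiator (e.g. open-iscsi) and boots the VM from the resulting block device; only the storage server ever runs LVM commands, so no cluster locking is needed.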
> If you need a way for a VM to request enlarging an LV it accesses, or
> similar interaction, I would make a simple API where each VM gets a
> token that determines what LVs it has access to and how much total
> storage it can consume. Maybe someone has already done that.
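As a hedged sketch of what such a token-gated API might look like on the storage host: a wrapper that checks a per-token quota before running lvextend. The quota/usage file formats and paths are invented for illustration, not an existing tool:

```shell
#!/bin/sh
# Hypothetical sketch: gate lvextend behind a per-token storage quota.
# Quota file lines: "<token> <max_total_MB>"; usage file: "<token> <used_MB>".
# File locations are made up; override via environment as needed.
QUOTA_FILE=${QUOTA_FILE:-/etc/lvquota/quota}
USAGE_FILE=${USAGE_FILE:-/etc/lvquota/usage}

# allow_resize TOKEN EXTRA_MB -> success (exit 0) iff the grow fits the quota
allow_resize() {
    token=$1 extra=$2
    max=$(awk -v t="$token" '$1==t {print $2}' "$QUOTA_FILE")
    used=$(awk -v t="$token" '$1==t {print $2}' "$USAGE_FILE")
    [ -n "$max" ] || return 1              # unknown token: deny
    [ $(( ${used:-0} + extra )) -le "$max" ]
}

# resize_lv TOKEN VG/LV EXTRA_MB -- runs lvextend only when the quota allows
resize_lv() {
    allow_resize "$1" "$3" || { echo "denied: quota or bad token" >&2; return 1; }
    lvextend -L "+${3}M" "/dev/$2"
}
```

The VM would call this over a small HTTP or ssh endpoint, presenting only its token; the mapping from token to permitted LVs and quota stays on the storage host.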
> I just issue the commands on the LVM/NAS/iSCSI host.
> I haven't done this, but there can be more than one thin pool, each on
> its own NAS/iSCSI server. So if one storage server crashes, only the
> VMs attached to it crash. You can only (simply) migrate a VM to
> another VM host on the same storage server.
> BUT, you can migrate a VM to another host less instantly using DRBD
> or other remote mirroring driver. I have done this. You get the
> remote LV mirror mostly synced, suspend the VM (to a file if you need
> to rsync that to the remote), finish the sync of the LV(s), resume the
> VM on the new server - in another city. Handy when you have a few hours'
> notice of a natural disaster (hurricane/flood).
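The migration sequence described above might look roughly like this; the DRBD resource name (r0), VM name (vm01), and state-file paths are all hypothetical, and the real procedure depends on your DRBD and libvirt setup:

```shell
# Hypothetical names throughout (resource r0, VM vm01); paths are illustrative.
drbdadm up r0                         # bring up the replicated resource on both sides
drbdadm primary r0                    # source side stays primary while syncing
# ... wait until `drbdadm status r0` shows the peer is (mostly) UpToDate ...
virsh save vm01 /var/tmp/vm01.state   # suspend the VM and dump its state to a file
rsync -a /var/tmp/vm01.state remote:/var/tmp/   # ship the state file to the new site
# let DRBD finish replicating the final writes, then swap roles
drbdadm secondary r0                  # on the old host
ssh remote drbdadm primary r0         # on the new host
ssh remote virsh restore /var/tmp/vm01.state    # resume the VM at the new site
```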