[fedora-virt] Prevent start of VM that is already running on another host
Gianluca Cecchi
gianluca.cecchi at gmail.com
Fri Sep 4 16:29:23 UTC 2009
My software environment is F11 x86_64 with the updated qemu/libvirt components provided
by the fedora-virt-preview repo.
I have noticed that in my setup, composed of 2 hosts with several VMs and
shared storage (see below for details), if a VM is already running on one host
and I try to start it on the other, nothing prevents it.
To be clear:
- I create a qemu/kvm-based VM1 on host1 with virt-manager;
at this point host2 knows nothing about VM1.
- I successfully live-migrate VM1 to host2 (thanks again Mark for opening the
bugzilla and for the follow-up to resolution);
at this point both nodes know about VM1 and in virt-manager it is in
"playing" mode on host2 and in greyed-out stopped mode on host1.
- If I now right-click and start VM1 on host1 from inside virt-manager, I
don't get any error... why?
(btw I can open a console on both and work at the same time in both
instances of the same VM, insisting on the same disks..... who knows what is
happening at the low level...)
In my opinion host1 should somehow know this and refuse to start VM1.
Better still, I would like VM1 not to appear at all in the host1 section of
virt-manager after migration, so that I cannot even start it....
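At the moment the only guard I can think of is asking the other host's
libvirtd by hand before starting; just as a sketch (assuming qemu+ssh access
between the two hosts and that the guest is named VM1 on both):

# on host1, before starting VM1, ask host2's libvirtd about it
virsh -c qemu+ssh://host2/system domstate VM1
# only if this reports "shut off" (and not "running") would I then do
virsh -c qemu:///system start VM1

but of course nothing forces me (or virt-manager) to do that check.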
From the hardware point of view, my setup for the VMs' storage is based on DRBD 8.3.2 in
primary/primary mode, composing a PV that is seen by both hosts,
so the disk of VM1 is an LV inside a VG built on top of it.
I'm also using RHCS/CLVM as a layer for this.
But I presume I would have the same problem with a real SAN and CLVM-managed
volumes, correct?
For example, on a RHEL 5.3 cluster (which has no virtualization at all; just
for comparison) with clvmd on a SAN-based PV I can see:
[root@node1 ~]# lvs
  LV           VG          Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  LV_ORADATA   VG_ORADATA  -wi-a- 119.98G
  LV_databases VolGroup00  -wi-ao   8.00G
[root@node2 ~]# lvs
  LV           VG          Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  LV_ORADATA   VG_ORADATA  -wi-ao 119.98G
  LV_databases VolGroup00  -wi-ao   8.00G
Here, only node2 has the Oracle data LV open, while the other node still has
access to the VG and sees any modifications to it (extend VG, add LV, etc.)
in real time, so that in case of failover it can take over the service
immediately.
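(By the way, the trailing "o" in the Attr column is the device-open flag: in
the six-character lv_attr string of this LVM2 version, the sixth character is
"o" when the LV is held open on that host. Just as a sketch, a local check
could be:

lvs --noheadings -o lv_attr VG_ORADATA/LV_ORADATA | awk '{print substr($1,6,1)}'
# prints "o" where the LV is open (node2 above), "-" where it is not (node1)

Note that this only reflects the local open count: node1 cannot tell from lvs
alone that node2 has the LV open, which is why I think the check really has
to happen at the libvirt/virt-manager level.)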
The same holds for my F11 cluster, where the DRBD-synced VG is vg_qemu01 and
the hosts are virtfed and virtfedbis:
[root@virtfed ~]# lvs
  LV           VG         Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  centos53     vg_qemu01  -wi-ao  6.35G
  test_vm_drbd vg_qemu01  -wi-a-  5.00G
  w2k3_01      vg_qemu01  -wi-a-  6.35G
  lv_root      vg_virtfed -wi-ao 12.00G
  lv_swap      vg_virtfed -wi-ao  4.00G
[root@virtfedbis ~]# lvs
  LV           VG         Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  centos53     vg_qemu01  -wi-a-  6.35G
  test_vm_drbd vg_qemu01  -wi-ao  5.00G
  w2k3_01      vg_qemu01  -wi-a-  6.35G
  lv_root      vg_virtfed -wi-ao 12.00G
  lv_swap      vg_virtfed -wi-ao  4.00G
Giving the VMs the same names as their corresponding LVs, right now I have:
VM centos53: its LV active on both hosts, but opened/started on virtfed
VM test_vm_drbd: its LV active on both hosts, but opened/started on virtfedbis
VM w2k3_01: powered off on both
If I start VM centos53 on virtfedbis after migration, the start succeeds, but
I actually corrupt my running centos53 operating system instance, due to my
intended primary/primary config for DRBD
(and I would get the same on a real SAN, where there really is only ONE
volume).
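Until something enforces this for me, the best I can do is wrap the same peer
check as above into a small script instead of clicking start in virt-manager;
a rough sketch, with the peer host name and connection URI as my own
assumptions:

#!/bin/sh
# safe-start.sh VMNAME -- refuse to start a guest the peer already runs
VM="$1"
PEER="virtfed"   # or virtfedbis, depending on which host this runs on
STATE=$(virsh -c "qemu+ssh://$PEER/system" domstate "$VM" 2>/dev/null)
if [ "$STATE" = "running" ]; then
    echo "$VM is already running on $PEER, refusing to start it here" >&2
    exit 1
fi
exec virsh -c qemu:///system start "$VM"

But this obviously protects nothing when the guest is started from
virt-manager itself, which is exactly why I would like libvirt/virt-manager
to do the check (or hide the migrated-away guest) on their own.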
I know that I can make an active/passive config with DRBD in
primary/secondary, but in my opinion all the pieces are there to get
active/active too.
Also, I would prefer to manage VM transitions from host1 to host2 and
vice versa with virt-manager, and not as services of the RHCS (which is the
alternative some people use).
Thanks for attention,
Gianluca