My software environment is F11 x86_64 with the updated qemu/libvirt components provided by the fedora-virt-preview repo.
In my setup, made up of 2 hosts with several VMs and shared storage (see below for details), I notice that if a VM is already started on one host, starting it on the other one as well is not prevented.
To be clear:
- I create a qemu/kvm based VM1 on host1 with virt-manager.
At this point host2 knows nothing about VM1.
- I successfully live migrate VM1 to host2 (thanks again, Mark, for opening the bugzilla and following it up to resolution).
At this point both nodes know about VM1: in virt-manager it is in "playing" mode on host2 and in greyed-out stopped mode on host1.
- If I now right-click and start VM1 on host1 from inside virt-manager, I don't get any error... why?
(By the way, I can open the console on both hosts and work on both instances of the VM at the same time, both insisting on the same disks... who knows what is happening at the low level...)

In my opinion host1 should somehow know about this and refuse to start VM1.
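I did all of this through virt-manager, but I believe the command-line equivalent with virsh would be roughly the following (the qemu+ssh destination URI is only an example of how the hosts could reach each other):

  # on host1, while VM1 is running there
  virsh migrate --live VM1 qemu+ssh://host2/system
  # VM1 is now running on host2, but its definition is still present on host1,
  # and as far as I can tell nothing refuses to start that stale copy:
  virsh start VM1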
Better still, I would like VM1 not to appear at all in the host1 section of virt-manager after the migration, so that I cannot even start it.

From the hardware point of view, my storage setup for the VMs is based on DRBD 8.3.2 in primary/primary mode, making up a PV that is seen by both hosts; the disk of VM1 is an LV inside a VG built on that PV, and I'm also using RHCS/CLVM as a layer for this.
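Roughly, the layering is built like this (the DRBD device name below is only an example; vg_qemu01 is the real VG shown later):

  # the dual-primary DRBD device is visible on both hosts
  pvcreate /dev/drbd0
  vgcreate -c y vg_qemu01 /dev/drbd0       # clustered VG, managed by clvmd
  lvcreate -L 6.35G -n centos53 vg_qemu01  # one LV per VM disk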
But I presume I would have the same problem in the case of a real SAN with CLVM-managed volumes, correct?
For example, on a RHEL 5.3 cluster (which has no virtualization at all; it is only here for comparison) with clvmd on a SAN-based PV I can see:
[root@node1 ~]# lvs
  LV            VG          Attr   LSize    Origin Snap%  Move Log Copy%  Convert
  LV_ORADATA    VG_ORADATA  -wi-a- 119.98G
  LV_databases  VolGroup00  -wi-ao   8.00G

[root@node2 ~]# lvs
  LV            VG          Attr   LSize    Origin Snap%  Move Log Copy%  Convert
  LV_ORADATA    VG_ORADATA  -wi-ao 119.98G
  LV_databases  VolGroup00  -wi-ao   8.00G

Here only node2 has the Oracle data LV open (the "o" in its attributes), while node1 still has access to the VG and sees its modifications (extend the VG, add an LV, etc.) in real time.
So, in case of failover, it is able to take over the service immediately.

The same holds for my F11 cluster, where the DRBD-synced VG is vg_qemu01 and my hosts are virtfed and virtfedbis:

[root@virtfed ~]# lvs
  LV            VG          Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  centos53      vg_qemu01   -wi-ao  6.35G
  test_vm_drbd  vg_qemu01   -wi-a-  5.00G
  w2k3_01       vg_qemu01   -wi-a-  6.35G
  lv_root       vg_virtfed  -wi-ao 12.00G
  lv_swap       vg_virtfed  -wi-ao  4.00G

[root@virtfedbis ~]# lvs
  LV            VG          Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  centos53      vg_qemu01   -wi-a-  6.35G
  test_vm_drbd  vg_qemu01   -wi-ao  5.00G
  w2k3_01       vg_qemu01   -wi-a-  6.35G
  lv_root       vg_virtfed  -wi-ao 12.00G
  lv_swap       vg_virtfed  -wi-ao  4.00G

Since each VM has the same name as its corresponding LV, right now I have:
- VM centos53 defined on both hosts but opened/started only on virtfed
- VM test_vm_drbd defined on both hosts but opened/started only on virtfedbis
- VM w2k3_01 powered off on both
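Seen from libvirt, virsh list --all on the two hosts gives roughly this (the domain IDs are only illustrative):

[root@virtfed ~]# virsh list --all
 Id Name                 State
----------------------------------
  1 centos53             running
  - test_vm_drbd         shut off
  - w2k3_01              shut off

[root@virtfedbis ~]# virsh list --all
 Id Name                 State
----------------------------------
  2 test_vm_drbd         running
  - centos53             shut off
  - w2k3_01              shut off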
If I start VM centos53 on virtfedbis after the migration, the start succeeds but I actually corrupt my running centos53 operating system instance, because of my intended primary/primary configuration for DRBD (and I would get the same on a real SAN, where there really is only ONE volume).
I know that I can make an active/passive configuration with DRBD in primary/secondary mode, but in my opinion all the pieces are there to get active/active too.
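For reference, the dual-primary part of my drbd.conf looks roughly like this (the resource name is only an example and most sections are omitted):

resource r_qemu01 {
  startup {
    become-primary-on both;   # bring the resource up as primary on both nodes
  }
  net {
    allow-two-primaries;      # needed for the primary/primary setup
  }
  # disk, syncer and per-host "on <hostname>" sections omitted
}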
Also, I would prefer to manage VM transitions from host1 to host2 and vice versa through virt-manager, and not as services of RHCS (which is the alternative some people use).

Thanks for your attention,
Gianluca