[Ovirt-devel] Migration issues

Ian Main imain at redhat.com
Fri Feb 13 18:35:13 UTC 2009


On Thu, 12 Feb 2009 12:04:21 -0600
"Carb, Brian A" <Brian.Carb at unisys.com> wrote:

> Running oVirt 0.96 with 4 host nodes available; the VM is installed using the appliance's /ovirtnfs/disk1.
> 
> If I have a VM running on node1 and I select "migrate" to node2, the task is queued, but the migration never happens, and the taskomatic log shows "libvir: Remote error : socket closed unexpectedly".
> 
> When I log in to node2 and examine /var/log/libvirt/qemu/vm.log, the log shows that qemu is trying to access the VM's disk under /mnt/xxxxxx (some generated tmp directory name), but no mountpoint with that name exists; there is a mountpoint with a different randomly generated name. If I manually create a mountpoint with the correct name and mount /ovirtnfs there, then migration works (see the sketch after the quoted message).
> 
> Also, the host on which the VM is running after migration is not updated correctly for the VM in the oVirt dashboard.
> 
> Are these known issues, or am I doing something wrong?
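
(For reference, the manual workaround described above amounts to something like the sketch below; the mount point name and the NFS source are illustrative, since the generated directory name varies and the export path depends on the appliance setup.)

    # Create the mount point qemu expects (name taken from the vm.log output)
    mkdir -p /mnt/xxxxxx
    # Mount the appliance's NFS export there; "appliance" is a placeholder
    # for the oVirt appliance's hostname
    mount -t nfs appliance:/ovirtnfs /mnt/xxxxxx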

Hrrm, last I tested, it was working. What do you have for the disk configuration in your VM?
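
One quick way to check is to dump the domain XML and look at the <disk> elements (a sketch; <vm-name> is a placeholder for your VM's libvirt domain name):

    virsh dumpxml <vm-name> | grep -A 4 '<disk'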

The newer taskomatic that uses qpid has much better error messages. Unfortunately, with that version you will have to log into each node and run libvirtd with debugging enabled to see what is actually going wrong. I can't remember whether that's how storage normally behaves or not, and I don't have a working configuration right now to test.
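If it helps, one way to turn on libvirtd debugging on a node is roughly the following (a sketch assuming a libvirt new enough to have the logging controls in libvirtd.conf; the log file path is just a suggestion):

    # /etc/libvirt/libvirtd.conf
    log_level = 1                                         # 1 = DEBUG
    log_outputs = "1:file:/var/log/libvirt/libvirtd.log"  # log to a file

    # Restart the daemon so the settings take effect
    service libvirtd restart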

Sorry for the delay in responding, by the way; we're all madly trying to get a new release out, get the new installer working, etc. I'm hoping to do some more serious testing in these areas for the release, which should be coming soon.

    Ian



