[virt-tools-list] [virt-manager PATCH] virtManager: keep polling the connection while waiting for a new VM

Cole Robinson crobinso at redhat.com
Thu Feb 13 16:21:51 UTC 2014


On 02/13/2014 10:46 AM, Giuseppe Scrivano wrote:
> Cole Robinson <crobinso at redhat.com> writes:
> 
>> On 02/13/2014 10:38 AM, Giuseppe Scrivano wrote:
>>> Cole Robinson <crobinso at redhat.com> writes:
>>>
>>>>> diff --git a/virtManager/create.py b/virtManager/create.py
>>>>> index d8e68c3..f8c72e4 100644
>>>>> --- a/virtManager/create.py
>>>>> +++ b/virtManager/create.py
>>>>> @@ -1789,6 +1789,7 @@ class vmmCreate(vmmGObjectUI):
>>>>>          while (guest.uuid not in self.conn.vms) and (count < 100):
>>>>>              count += 1
>>>>>              time.sleep(.1)
>>>>> +            self.conn.schedule_priority_tick(pollvm=True)
>>>>>  
>>>>>          vm = self.conn.get_vm(guest.uuid)
>>>>>          vm.tick()
>>>>>
>>>>
>>>> That will queue quite a lot of API calls, and particularly with the new domain
>>>> events turned on it shouldn't even make a difference, as far as I can see. Is
>>>> this regularly reproducible? If so, how?
>>>
>>> I hit it while creating a new VM for some tests, and then it was quite
>>> easy to reproduce while debugging: I had to manually create a bunch of
>>> VMs through the new VM wizard and it happened very often. I have no exact
>>> numbers, but I would say once every 3-4 attempts.
>>>
>>
>> Hmm, I'll try some more installs. Are there any other backtraces in the logs?
>> Also, do the logs say 'using domain events'?
> 
> No, that is the only backtrace I can see here; it is using domain
> events:
> 
> [gio, 13 feb 2014 16:45:06 virt-manager 31303] DEBUG (connection:872) Using domain events
> 

Okay, I reproduced it. I think this is fixed in git now. If
schedule_priority_tick(pollvm=True) was called while domain events are enabled,
it just tried to set self.vms = self.vms. But if the idle callback that
actually sets self.vms was getting backed up, we could end up scheduling:

tick(pollvm=True, force=True)
-> register idle handler self.vms = self.vms + new_vm
tick(pollvm=True, force=False)
-> register idle handler self.vms = self.vms

idle1 runs: self.vms = self.vms + new_vm
idle2 runs: self.vms = self.vms (old VM list)
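
To see the race outside of virt-manager, here is a minimal standalone sketch
(Conn, schedule_tick and idle_queue are made-up names for the example, not the
real virt-manager API). Each tick snapshots a VM list and queues a callback
that blindly assigns it later, so a stale snapshot that happens to be queued
second wins:

class Conn:
    def __init__(self):
        self.vms = ["vm-old"]

conn = Conn()
idle_queue = []

def schedule_tick(polled_vms):
    # queue an idle callback that unconditionally overwrites conn.vms later
    idle_queue.append(lambda vms=polled_vms: setattr(conn, "vms", vms))

# tick 1: actually polled libvirt and saw the new VM
schedule_tick(["vm-old", "vm-new"])
# tick 2: domain events enabled, no real poll, just snapshots the current list
schedule_tick(list(conn.vms))

# the idle callbacks only run later, in order
for cb in idle_queue:
    cb()

print(conn.vms)   # ['vm-old'] -- the new VM was clobbered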

I fixed this by making sure we only overwrite self.vms in the idle callback if
we actually updated it during the tick:

commit 3f27bc1bd1412ce7944cb814528a4bde2349638c
Author: Cole Robinson <crobinso at redhat.com>
Date:   Thu Feb 13 11:11:21 2014 -0500

    connection: Fix race when updating conn.vms

    We update the canonical conn.vms list in an idle callback, so that other
    parts of the main UI thread won't see conn.vms change while they are
    iterating over it.

    Problem with this is that if multiple ticks() are scheduled before
    the first idle handler has a chance to run, we can overwrite the VM
    list with stale data and it can fail to be correctly updated.

    Fix this by only updating 'vms' if it actually changed.

diff --git a/virtManager/connection.py b/virtManager/connection.py
index f508248..d54abe8 100644
--- a/virtManager/connection.py
+++ b/virtManager/connection.py
@@ -1160,11 +1160,16 @@ class vmmConnection(vmmGObject):
             if not self._backend.is_open():
                 return

-            self.vms = vms
-            self.nodedevs = nodedevs
-            self.interfaces = interfaces
-            self.pools = pools
-            self.nets = nets
+            if pollvm:
+                self.vms = vms
+            if pollnet:
+                self.nets = nets
+            if polliface:
+                self.interfaces = interfaces
+            if pollpool:
+                self.pools = pools
+            if pollnodedev:
+                self.nodedevs = nodedevs

             # Make sure device polling is setup
             if not self.netdev_initialized:
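
Continuing the same toy sketch from above (still hypothetical names, reusing
the conn and idle_queue objects), the guarded version only overwrites the list
when the tick really polled it:

def schedule_tick_fixed(pollvm, polled_vms):
    def cb(pollvm=pollvm, vms=polled_vms):
        if pollvm:
            # only overwrite when this tick actually polled the VM list
            conn.vms = vms
    idle_queue.append(cb)

idle_queue.clear()
schedule_tick_fixed(True, ["vm-old", "vm-new"])    # real poll
schedule_tick_fixed(False, list(conn.vms))         # event-driven tick, nothing polled

for cb in idle_queue:
    cb()

print(conn.vms)   # ['vm-old', 'vm-new'] -- the update survives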




