[libvirt] [PATCH 4/5] Introcude VIR_CONNECT_GET_ALL_DOMAINS_STATS_BEST_EFFORT

Michal Privoznik mprivozn at redhat.com
Fri Jun 15 07:19:44 UTC 2018


On 06/14/2018 05:35 PM, Daniel P. Berrangé wrote:
> On Thu, Jun 14, 2018 at 11:07:43AM +0200, Michal Privoznik wrote:
>> On 06/13/2018 05:34 PM, John Ferlan wrote:
>>>
>>> $SUBJ: "Introduce" and "NO_WAIT"
>>>
>>>
>>> On 06/07/2018 07:59 AM, Michal Privoznik wrote:
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1552092
>>>>
>>>> If there's a long running job it might cause us to wait 30
>>>> seconds before we give up acquiring job. This may cause trouble
>>>
>>> s/job/the job/
>>>
>>> s/may cause trouble/is problematic/
>>>
>>>> to interactive applications that fetch stats repeatedly every few
>>>> seconds.
>>>>
>>>> Solution is to introduce
>>>
>>> The solution is...
>>>
>>>> VIR_CONNECT_GET_ALL_DOMAINS_STATS_BEST_EFFORT flag which tries to
>>>> acquire job but does not wait if acquiring failed.
>>>>
>>>> Signed-off-by: Michal Privoznik <mprivozn at redhat.com>
>>>> ---
>>>>  include/libvirt/libvirt-domain.h |  1 +
>>>>  src/libvirt-domain.c             | 10 ++++++++++
>>>>  src/qemu/qemu_driver.c           | 15 ++++++++++++---
>>>>  3 files changed, 23 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
>>>> index da773b76cb..1a1d34620d 100644
>>>> --- a/include/libvirt/libvirt-domain.h
>>>> +++ b/include/libvirt/libvirt-domain.h
>>>> @@ -2055,6 +2055,7 @@ typedef enum {
>>>>      VIR_CONNECT_GET_ALL_DOMAINS_STATS_SHUTOFF = VIR_CONNECT_LIST_DOMAINS_SHUTOFF,
>>>>      VIR_CONNECT_GET_ALL_DOMAINS_STATS_OTHER = VIR_CONNECT_LIST_DOMAINS_OTHER,
>>>>  
>>>> +    VIR_CONNECT_GET_ALL_DOMAINS_STATS_BEST_EFFORT = 1 << 29, /* ignore stalled domains */
>>>
>>> "stalled"?  How about "don't wait on other jobs"
>>
>> Well, my hidden idea was that we could also "misuse" this flag to
>> avoid waiting in other places too. For instance, if we found out
>> (somehow) that a domain is in D state, we would consider it stalled
>> even without any job running on it. Okay, we have no way of detecting
>> whether qemu is in D state right now, but you get my point. If we
>> don't tie this flag down to just domain jobs (which not all drivers
>> have, btw), we can use it more widely.
> 
> I would suggest we call it "NOWAIT", with an explanation that we will
> only report statistics that can be obtained immediately without any
> blocking, whatever the cause may be.

Okay, works for me. I'll post v2 shortly.

> 
> 
> On a tangent, I think this problem really calls for a significantly
> different design approach, medium term.
> 
> The bulk stats query APIs were a good step forward over what we had
> before, where users had to call many libvirt APIs, but they are still
> not very scalable. With huge numbers of guests, we're still having
> to serialize stats query calls into 1000's of QEMU processes.
> 
> I think we must work with QEMU to define a better interface, taking
> advantage of the fact that we're colocated on the same host. I.e. we
> tell QEMU we want stats exported in memory page <blah>, and QEMU will
> keep that updated at all times.
> 
> When libvirt needs the info it can then just read it straight out of
> the shared memory page, no blocking on any jobs, no QMP serialization,
> etc.
> 
> For that matter, we can do a similar thing in libvirt API too. We can
> export a shared memory region for applications to use, which we keep
> updated on some regular interval that app requests. They can then always
> access updated stats without calling libvirt APIs at all.

This is a clever idea. But qemu is not the only source of the stats we
gather; we also fetch data from cgroups, /proc, the OVS bridge, etc. So
libvirt would need to add its own stats for clients to see. This means
there would have to be a function that updates the shared memory every
so often (at an interval the client requests via some new API?). The
same goes for the qemu implementation. Now imagine two clients wanting
two different refresh rates whose GCD is 1 :-)

Michal



