[Qemu-devel] Re: [libvirt] Re: [PATCH 2/3] Introduce monitor 'wait' command

Jamie Lokier jamie at shareable.org
Thu Apr 9 17:12:17 UTC 2009


Anthony Liguori wrote:
> Paul Brook wrote:
> >No you don't. If you use event flags rather than discrete events then you 
> >don't need to buffer at all. You just need to be able to store the state 
> >of each type of event you're going to raise, which should be a bounded set.
> >
> >This has its own set of issues - typically race conditions or "lost"
> >events if the client (libvirt) code isn't written carefully - and it
> >means you can't attach information to an event, only indicate that
> >something happened.
> >However, if the correct model is used (event-driven polling rather
> >than purely event-driven), this shouldn't be a problem.
> 
> It's just deferring the problem.  Consider the case of VNC user 
> authentication.  You want to have events associated with whenever a user 
> connects and disconnects so you can keep track of who's been on a 
> virtual machine for security purposes.
>
> In my model, you record the last 10 minutes' worth of events.  If a user 
> aggressively connects and reconnects, that could consume a huge amount of 
> memory.  To combat that, you could further limit it by recording only a 
> finite number of events.

It's not deferring the problem - it's mixing two different, slightly
incompatible problems, and that means bugs.

One is monitoring state of QEMU.  (E.g. is the VM running, has it
stopped due to ENOSPC, has the watchdog triggered, what's the current
list of attached VNC and monitor clients, how's that migration going).
That's a good use for event-driven polling, because it won't break
even if a monitoring app goes to sleep for 15 minutes, disconnects
and reconnects, and so on.
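
Roughly, a minimal sketch of that flag model in C (invented names like
raise_event/take_events, not QEMU's actual API): each event type is a
single pending bit, so repeated events merge and storage stays bounded
no matter how fast events arrive.

    #include <stdint.h>

    enum {
        EV_VM_STOPPED  = 1u << 0,  /* VM stopped, e.g. due to ENOSPC */
        EV_WATCHDOG    = 1u << 1,  /* watchdog triggered */
        EV_VNC_CLIENTS = 1u << 2,  /* VNC/monitor client list changed */
        EV_MIGRATION   = 1u << 3,  /* migration progress changed */
    };

    static uint32_t pending;       /* one bit per event type: bounded */

    /* Producer: raising the same event twice merges into one bit. */
    static void raise_event(uint32_t ev)
    {
        pending |= ev;
    }

    /* Consumer: take all pending bits, then poll the full state (VM
     * status, client list, migration info) for each bit that was set. */
    static uint32_t take_events(void)
    {
        uint32_t ev = pending;
        pending = 0;
        return ev;
    }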

The other problem is logging discrete events.  For that you do need
to expire old entries, cap the number of stored events, probably
timestamp them, and not merge equivalent events together.
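
For contrast, a sketch of the discrete log (again invented names, not
QEMU code): a fixed-capacity ring of timestamped events, no merging,
silently dropping the oldest entry once the cap is reached.

    #include <stddef.h>
    #include <time.h>

    #define LOG_CAPACITY 1000

    struct event {
        time_t when;        /* timestamp, so old entries can expire */
        int    type;        /* e.g. VNC connect/disconnect */
        char   detail[64];  /* client address, auth username, ... */
    };

    static struct event ring[LOG_CAPACITY];
    static size_t head, count;

    static void log_event(const struct event *ev)
    {
        if (count == LOG_CAPACITY) {
            head = (head + 1) % LOG_CAPACITY;  /* drop the oldest */
            count--;
        }
        ring[(head + count) % LOG_CAPACITY] = *ev;
        count++;
    }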

But that is unreliable for event-driven state monitoring.  What
happens if there are 1000 VNC connect/disconnect pairs in rapid
succession (think of a client bug, or an open port facing the net)?
If the event log has a limit of 1000 stored events, it will throw
away some events before a monitoring app sees them.  That app then
fails to notice that the VM stopped due to ENOSPC, because that event
was discarded before the app could read it.

Linux has something analogous to these two: normal signals and
real-time queued signals.  Normal signals are fixed-size state, and
they are never lost if used properly.  Real-time queued signals carry
information, such as which I/O or timer completed, but their queue
size is limited.  Losing data when the queue fills would be fatal,
because apps depend on deducing state from the queued events, so a
special "queue full" signal is always sent when the real-time signal
queue overflows.  That tells apps the detailed queued data has been
lost and they need to fall back to polling to re-check the state of
everything.
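
A small self-contained demo of the difference, using only the POSIX
signal API (nothing QEMU-specific): a classic signal sent three times
while blocked merges into one delivery, while a queued real-time
signal is delivered once per sigqueue() call - and sigqueue() itself
fails with EAGAIN once the per-process queue limit is reached.

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t usr1_hits, rt_hits;

    static void on_usr1(int sig)
    {
        (void)sig;
        usr1_hits++;            /* merged: one hit however often sent */
    }

    static void on_rt(int sig, siginfo_t *info, void *ctx)
    {
        (void)sig; (void)info; (void)ctx;
        rt_hits++;              /* queued: one hit per sigqueue() */
    }

    int main(void)
    {
        struct sigaction sa_usr1 = { 0 }, sa_rt = { 0 };
        sigset_t block, old;

        sa_usr1.sa_handler = on_usr1;
        sigemptyset(&sa_usr1.sa_mask);
        sigaction(SIGUSR1, &sa_usr1, NULL);

        sa_rt.sa_sigaction = on_rt;
        sa_rt.sa_flags = SA_SIGINFO;
        sigemptyset(&sa_rt.sa_mask);
        sigaction(SIGRTMIN, &sa_rt, NULL);

        sigemptyset(&block);
        sigaddset(&block, SIGUSR1);
        sigaddset(&block, SIGRTMIN);
        sigprocmask(SIG_BLOCK, &block, &old);

        for (int i = 0; i < 3; i++) {
            union sigval v = { .sival_int = i };
            kill(getpid(), SIGUSR1);          /* state-like: merges */
            sigqueue(getpid(), SIGRTMIN, v);  /* queued, with payload */
        }

        sigprocmask(SIG_SETMASK, &old, NULL); /* pending signals fire */

        printf("SIGUSR1: %d delivery(ies), SIGRTMIN: %d\n",
               (int)usr1_hits, (int)rt_hits); /* 1 and 3 on Linux */
        return 0;
    }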

To support state-monitoring apps (e.g. one which tells you whether
the VM is still running :-), you mustn't discard the "VM has stopped"
event, no matter how many times it or other events are sent.  But you
can merge repeats into a single pending state.

To support looking at recent VNC connections, you do need a limit on
the number of stored events, discarding entries when the limit is
reached, and no merging.

One way to make both of these work is a "some events have been
discarded here" event.  State-monitoring apps which see it at least
know they should poll all the state again.  It's not ideal, though,
if that happens often.
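
Sketching that combined idea on top of the log example above (it
reuses struct event, ring, head and count from there; EVENT_DISCARDED
is an invented marker, not an existing QEMU event): on overflow,
remember the gap, and have the reader emit one synthetic marker so
state-monitoring apps know to re-poll everything.

    #define EVENT_DISCARDED (-1)

    static int overflowed;

    static void log_event_marked(const struct event *ev)
    {
        if (count == LOG_CAPACITY) {
            head = (head + 1) % LOG_CAPACITY;  /* drop the oldest... */
            count--;
            overflowed = 1;                    /* ...but remember the gap */
        }
        ring[(head + count) % LOG_CAPACITY] = *ev;
        count++;
    }

    /* Reader: returns 1 and fills *ev, or 0 if the log is empty.  The
     * marker comes out first, since the dropped events were the oldest;
     * consecutive drops merge into one marker. */
    static int log_read(struct event *ev)
    {
        if (overflowed) {
            overflowed = 0;
            ev->when = time(NULL);
            ev->type = EVENT_DISCARDED;
            return 1;
        }
        if (count == 0)
            return 0;
        *ev = ring[head];
        head = (head + 1) % LOG_CAPACITY;
        count--;
        return 1;
    }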

-- Jamie
