qemu modularization of qemu-5.1 vs libvirt domcapabilities cache?

Daniel P. Berrangé berrange at redhat.com
Thu Aug 20 10:35:19 UTC 2020


On Thu, Aug 20, 2020 at 12:31:15PM +0200, Martin Wilck wrote:
> On Thu, 2020-08-20 at 10:57 +0100, Daniel P. Berrangé wrote:
> > On Thu, Aug 20, 2020 at 11:32:03AM +0200, Martin Wilck wrote:
> > > On Tue, 2020-08-18 at 15:15 -0600, Jim Fehlig wrote:
> > > > On 8/5/20 2:19 AM, Andrea Bolognani wrote:
> > > > > I guess we need to start checking the modules directory in
> > > > > addition to the main QEMU binary, and regenerate capabilities
> > > > > every time its contents change.
> > > > 
> > > > We recently received reports of this issue on Tumbleweed, which
> > > > just got the modularized qemu 5.1
> > > > 
> > > > https://bugzilla.opensuse.org/show_bug.cgi?id=1175320
> > > > 
> > > > Mark, are you working on a patch to invalidate the cache on
> > > > changes to the qemu modules directory? I suppose it needs to be
> > > > handled similarly to the qemu binaries. E.g. when building the
> > > > cache, include a list of all qemu modules found. When validating
> > > > the cache, see if any modules have disappeared, if any have been
> > > > added, and if the ctime of any has changed. Yikes, sounds a little
> > > > more complex than the binaries :-).
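
[ As a rough illustration of the check described above -- the struct,
helper names and directory handling below are a hypothetical sketch,
not libvirt's actual capabilities-cache code: ]

#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>

typedef struct {
    char name[256];      /* module file name */
    time_t ctime;        /* ctime recorded when the cache was built */
} ModuleStamp;

/* Record every regular file in 'dir' along with its ctime. */
static ssize_t scan_modules(const char *dir, ModuleStamp *stamps, size_t max)
{
    DIR *d = opendir(dir);
    struct dirent *ent;
    size_t n = 0;

    if (!d)
        return -1;
    while ((ent = readdir(d)) != NULL && n < max) {
        char path[PATH_MAX];
        struct stat st;

        if (ent->d_name[0] == '.')
            continue;
        snprintf(path, sizeof(path), "%s/%s", dir, ent->d_name);
        if (stat(path, &st) < 0 || !S_ISREG(st.st_mode))
            continue;
        snprintf(stamps[n].name, sizeof(stamps[n].name), "%s", ent->d_name);
        stamps[n].ctime = st.st_ctime;
        n++;
    }
    closedir(d);
    return (ssize_t)n;
}

/* Cache is stale if a module was added or removed, or if any module's
 * ctime differs from what was recorded when the cache was built. */
static int modules_changed(const ModuleStamp *cached, size_t ncached,
                           const ModuleStamp *now, size_t nnow)
{
    size_t i, j;

    if (ncached != nnow)
        return 1;                 /* added or removed */
    for (i = 0; i < ncached; i++) {
        for (j = 0; j < nnow; j++) {
            if (strcmp(cached[i].name, now[j].name) == 0) {
                if (cached[i].ctime != now[j].ctime)
                    return 1;     /* module rebuilt or replaced */
                break;
            }
        }
        if (j == nnow)
            return 1;             /* module disappeared */
    }
    return 0;
}

[ In essence this is the same ctime comparison already done for the
emulator binaries, applied to every file under the modules directory. ]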
> > > 
> > > I'd like to question whether this effort is justified for an
> > > optimization that matters only at libvirtd startup time, and even
> > > there saves no more than a few seconds.
> > > 
> > > I propose to simply disable the caching of qemu capabilities (or
> > > provide a build-time option to do so). Optimizations that produce
> > > wrong results should be avoided.
> > 
> > Whether the time matters depends on your use case for QEMU. For heavy
> > data center apps like OpenStack you won't notice it, because OpenStack
> > itself adds so much overhead to the system. For cases where the VM is
> > used as "embedded" infrastructure, startup time can be critical. Not
> > caching capabilities easily adds 300-500 ms to the startup of a single
> > VM, which is very significant when the current minimum startup time of
> > a VM can be as low as 150 ms.
> > 
> > IOW removing caching is not a viable option.
> 
> Capability caching could be turned into a build-time option, optimized
> for the target audience.

When you are an OS distro, it is rare to know at build time what your
target audience's apps are going to be doing. So it isn't a decision that
can usefully be made at build time.

> Or we could enable caching in general, but always rescan capabilities at
> libvirtd startup. That way startup of VMs wouldn't be slowed down. No?

Scanning at libvirtd startup is something we work very hard to avoid.
When you have 20 QEMU system emulators installed, it makes libvirtd
startup incredibly slow, which is a big problem when we are using
auto-start + auto-shutdown for libvirtd.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
