[libvirt PATCH 0/3] Invalidate the cpu flags cache on changes of kernel command line

Daniel P. Berrangé berrange at redhat.com
Wed Aug 11 13:42:06 UTC 2021


On Wed, Aug 11, 2021 at 09:33:08AM -0400, Eduardo Habkost wrote:
> On Wed, Aug 11, 2021 at 4:43 AM Jiri Denemark <jdenemar at redhat.com> wrote:
> >
> > On Fri, Aug 06, 2021 at 18:12:21 +0100, Daniel P. Berrangé wrote:
> > > On Fri, Aug 06, 2021 at 05:07:45PM +0200, Jiri Denemark wrote:
> > > > On Thu, Aug 05, 2021 at 14:50:51 +0100, Daniel P. Berrangé wrote:
> > > > > On Thu, Aug 05, 2021 at 03:36:37PM +0200, Tim Wiederhake wrote:
> > > > > > The kernel command line can contain settings affecting the availability
> > > > > > of CPU features, e.g. "tsx=on". This series adds the kernel command line
> > > > > > to the cpu flags cache and declares the cache invalid if the current
> > > > > > kernel command line differs.
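
For context, the check the series implements boils down to comparing the
current /proc/cmdline with the value recorded when the cache was written.
A minimal sketch of that idea in C (the helper names are illustrative,
not libvirt's actual code):

    #include <stdio.h>
    #include <string.h>

    /* Illustrative helper: read the running kernel's command line */
    static int read_cmdline(char *buf, int len)
    {
        FILE *f = fopen("/proc/cmdline", "r");
        if (!f)
            return -1;
        if (!fgets(buf, len, f)) {
            fclose(f);
            return -1;
        }
        fclose(f);
        buf[strcspn(buf, "\n")] = '\0'; /* drop the trailing newline */
        return 0;
    }

    /* Treat the cache as stale when the stored command line differs
     * from the one the kernel was actually booted with */
    static int cache_is_valid(const char *cached_cmdline)
    {
        char now[4096];

        if (read_cmdline(now, sizeof(now)) < 0)
            return 0;
        return strcmp(now, cached_cmdline) == 0;
    }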
> > > > >
> > > > > Multiple things can change the CPU features: kernel version,
> > > > > microcode version, BIOS settings changes, kernel command line. We've
> > > > > been playing whack-a-mole with cache invalidation for ages, adding
> > > > > ever more criteria for things that have side effects on the CPU
> > > > > features available.
> > > > >
> > > > > Running the CPUID instruction is cheap. Could we directly query the
> > > > > set of host CPUID leaves we care about, compare that, and
> > > > > potentially even get rid of some of the other checks we have?
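
Such a direct query could use the compiler's cpuid.h helpers. A minimal
sketch, reading leaf 7 (where the TSX bits live) purely as an example of
the kind of leaf that would be recorded and compared:

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

        /* CPUID leaf 7, subleaf 0: structured extended feature flags */
        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
            return 1;

        /* EBX bit 4 = HLE, bit 11 = RTM (the TSX features) */
        printf("hle=%d rtm=%d\n",
               !!(ebx & (1u << 4)), !!(ebx & (1u << 11)));
        return 0;
    }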
> > > >
> > > > I guess it could help in some cases, but we wouldn't be able to drop
> > > > some of the existing checks anyway: the settings usually do not
> > > > result in the CPU dropping a particular bit from CPUID; the feature
> > > > just becomes unusable, reporting a failure when used. So the settings
> > > > would only be reflected in what features QEMU can enable on the host.
> > > > Although checking CPUID might be enough for TSX, checking the command
> > > > line is helpful in other cases.
> > >
> > > Would that be reflected in the answer to KVM_GET_SUPPORTED_CPUID,
> > > which is the intersection of the physical CPUID and what KVM is
> > > actually willing to enable? That ioctl would be cheap too.
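
For reference, a minimal sketch of calling that ioctl from userspace;
the fixed entry count of 128 is an arbitrary guess here, and real code
would grow the buffer and retry on E2BIG:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0)
            return 1;

        int nent = 128; /* arbitrary; retry with more entries on E2BIG */
        struct kvm_cpuid2 *cpuid =
            calloc(1, sizeof(*cpuid) + nent * sizeof(cpuid->entries[0]));
        if (!cpuid)
            return 1;
        cpuid->nent = nent;

        if (ioctl(kvm, KVM_GET_SUPPORTED_CPUID, cpuid) < 0) {
            perror("KVM_GET_SUPPORTED_CPUID");
            return 1;
        }

        /* The kernel updates nent to the number of entries filled in */
        for (unsigned int i = 0; i < cpuid->nent; i++)
            printf("leaf 0x%x idx %u: %08x %08x %08x %08x\n",
                   cpuid->entries[i].function, cpuid->entries[i].index,
                   cpuid->entries[i].eax, cpuid->entries[i].ebx,
                   cpuid->entries[i].ecx, cpuid->entries[i].edx);

        free(cpuid);
        close(kvm);
        return 0;
    }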
> >
> > I don't know, to be honest. I guess it should work unless QEMU does some
> > additional processing/filtering of the results it gets from KVM.
> >
> > Eduardo, do you know if KVM_GET_SUPPORTED_CPUID would be sufficient to
> check any configuration changes (BIOS settings, kernel command line,
> > module options, ...) that affect usable CPU features?
> 
> GET_SUPPORTED_CPUID is supposed to be enough to cover all kernel-side
> factors (including host CPUID flags, kernel and module options, BIOS
> settings). However, I would call KVM_GET_MSRS (the system ioctl, not
> the VCPU ioctl) to be extra safe. Some features are available only if
> KVM supports MSRs required for them, and some features are exposed to
> guests through bits in some MSRs.
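
For reference, a sketch of reading one feature MSR via the system-level
KVM_GET_MSRS (available on kernels with KVM_CAP_GET_MSR_FEATURES);
IA32_ARCH_CAPABILITIES, index 0x10a, is picked purely as an example:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0)
            return 1;

        struct kvm_msrs *msrs =
            calloc(1, sizeof(*msrs) + sizeof(msrs->entries[0]));
        if (!msrs)
            return 1;
        msrs->nmsrs = 1;
        msrs->entries[0].index = 0x10a; /* MSR_IA32_ARCH_CAPABILITIES */

        /* System ioctl on the /dev/kvm fd, not the VCPU variant; it
         * returns the number of MSRs successfully read */
        if (ioctl(kvm, KVM_GET_MSRS, msrs) != 1) {
            perror("KVM_GET_MSRS");
            return 1;
        }

        printf("IA32_ARCH_CAPABILITIES = 0x%llx\n",
               (unsigned long long)msrs->entries[0].data);

        free(msrs);
        close(kvm);
        return 0;
    }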
> 
> That would mean KVM_GET_SUPPORTED_CPUID + KVM_GET_MSRS + QEMU binary
> identifier (hash or date/time?) should be sufficient today. The
> problem here is the word "today": we never know what kind of extra KVM
> capabilities new features might require.
> 
> Wouldn't it be easier to simply invalidate the cache every time
> libvirtd is restarted? If libvirtd keeps /dev/kvm open all the time,
> this would also cover features affected by KVM module reloads.

Invalidating the cache on every restart defeats the very purpose of
having a cache in the first place. Probing for capabilities slows down
startup of the daemon, which is precisely why the cache was introduced.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|