[libvirt] Enabling x2apic by default in KVM (was Re: [Qemu-devel] [PATCH] target-i386: enable x2apic by default on more recent CPU models)

Eduardo Habkost ehabkost at redhat.com
Thu Feb 6 14:33:39 UTC 2014


On Tue, Feb 04, 2014 at 03:12:43PM +0100, Andreas Färber wrote:
> > On 03.02.2014 20:01, Eduardo Habkost wrote:
> > On Tue, Jan 21, 2014 at 05:13:50PM +0100, Paolo Bonzini wrote:
> >> On 21/01/2014 16:51, Andreas Färber wrote:
> >>>>> We already do that for other bits (e.g. XSAVE/OSXSAVE),
> >>> Please point me to the commit; a search for xsave did not come up with a
> >>> commit changing such a thing - either it did not go through my queue or
> >>> it slipped past me. Bugs are no excuse to produce more bugs.
> >>
> >> I meant that "-cpu SandyBridge" with TCG produces a CPU that doesn't
> >> have XSAVE.
> >>
> >>>>> and in fact it
> >>>>> is the same that we do for KVM: the KVM_GET_SUPPORTED_CPUID result is
> >>>>> used to trim the generic feature bits.
> >>> Our model definitions are the place to put stuff that real CPUs have.
> >>> Either the CPU has it or it doesn't. If it does, then this patch is
> >>> fully correct and it's TCG's job to mask things out. If we're adding
> >>> artificial flags to the generic model definitions just to make KVM
> >>> faster, then it is wrong - we have a choice of post_initialize and
> >>> realize hooks for that.
> >>
> >> It would make TCG faster as well, and there would be no reason
> >> really to avoid the "artificial" x2apic on TCG, if TCG implemented
> >> x2apic at all.
> > 
> > So, the discussion seems to have stalled.
> > 
> > Andreas, are you still against the patch, after the arguments from Paolo
> > and me?
> 
> Yes, I am. I had proposed to discuss solutions at FOSDEM but Paolo was
> not there, so no solution yet.
> 
> My main concern still is that if a CPU does not have a certain feature,
> we should not list it in that model's feature set; instead we should
> add it dynamically where sensible. Just because TCG filters it out
> today does not keep anyone from implementing it tomorrow, in which case
> the emulated CPUs would suddenly gain the feature. So my question still
> is: what rule can we apply for enabling x2apic? (Something like
> "greater than or equal to this family", etc. - then we can put it in
> your post_initialize hook so that users can still override it.)

The rule needs to be simple and predictable, so libvirt can know in
advance whether the results are going to match what the user requested.
That's the problem with enabling it automatically based on some rule
more complex than the existing "it's enabled if it's in the CPU model
table or on the command line" rule.

Considering the whole stack, there's a strong reason to enable x2apic by
default in all cases when using KVM. But if making the CPU model table
match real CPUs bit-by-bit is more important for QEMU (though I don't
understand why), I believe we will need to ask libvirt to do it (enable
x2apic explicitly on the command line by default). Or we could make the
CPU models look different in KVM and TCG mode, but I am trying to avoid
that.
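
Today a user can already request that explicitly through libvirt's CPU
feature policy (the model name here is just an example):

    <cpu match='exact'>
      <model>SandyBridge</model>
      <feature policy='require' name='x2apic'/>
    </cpu>

which ends up as "-cpu SandyBridge,+x2apic" on the QEMU command line.
What I'm describing above would just mean libvirt appending that
"+x2apic" by default, without the user having to ask for it.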

But I still believe that enabling x2apic by default in TCG _and_ KVM
mode is easier and harmless. I don't see the value in this academic
exercise of trying to make the CPU model table match real CPUs
bit-by-bit.

I mean: I simply don't expect any user will come to us complaining
because they didn't want x2apic to be enabled by default. And if some
user really cares about that, they can simply use "-x2apic" on the
command line. In other words, if it is not going to affect users
negatively, why are we spending energy on it?
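
For example, with any model that would gain x2apic by default (the model
name is again just an illustration):

    qemu-system-x86_64 -enable-kvm -cpu SandyBridge,-x2apic ...

turns it back off, so nobody would be stuck with the new default.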

-- 
Eduardo



