[libvirt] inconsistent handling of "qemu64" CPU model

Chris Friesen chris.friesen at windriver.com
Thu May 26 05:13:24 UTC 2016


Hi,

I'm not sure where the problem lies, hence the CC to both lists.  Please copy me 
on the reply.

I'm playing with OpenStack's devstack environment on an Ubuntu 14.04 host with a 
Celeron 2961Y CPU.  (libvirt detects it as a Nehalem with a bunch of extra 
features.)  QEMU reports version 2.2.0 (Debian 1:2.2+dfsg-5expubuntu9.7~cloud2).

If I don't specify a virtual CPU model, it appears to give me a "qemu64" CPU, 
and /proc/cpuinfo in the guest instance looks something like this:

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 6
model name      : QEMU Virtual CPU version 2.2.0
stepping        : 3
microcode       : 0x1
flags           : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni vmx cx16
x2apic popcnt hypervisor lahf_lm abm vnmi ept
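
(For reference, the models this QEMU build knows about, including "qemu64",
can be listed straight from the binary; the binary name below is a guess and
may be plain "kvm" depending on packaging:)

   $ qemu-system-x86_64 -cpu help | grep -i qemu64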


However, if I explicitly specify a custom CPU model of "qemu64" the instance 
refuses to boot and I get a log saying:

libvirtError: unsupported configuration: guest and host CPU are not compatible: 
Host CPU does not provide required features: svm

When this happens, some of the XML for the domain looks like this:
   <os>
     <type arch='x86_64' machine='pc-i440fx-utopic'>hvm</type>
  ....

   <cpu mode='custom' match='exact'>
     <model fallback='allow'>qemu64</model>
     <topology sockets='1' cores='1' threads='1'/>
   </cpu>
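
(In case the OpenStack side matters: I'm requesting the custom model through
the usual nova.conf knobs in devstack, roughly like this:)

   [libvirt]
   virt_type = kvm
   cpu_mode = custom
   cpu_model = qemu64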

Of course "svm" is an AMD flag and I'm running an Intel CPU.  But why does it 
work when I just rely on the default virtual CPU?  Is kvm_default_unset_features 
handled differently when it's implicit vs explicit?
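
(For what it's worth, the definition libvirt itself uses for qemu64, and the
guest/host compatibility check, can be poked at outside of OpenStack.  Paths
are from this libvirt 1.2.x install; newer releases split the map into
/usr/share/libvirt/cpu_map/:)

   # which features libvirt's qemu64 model requires
   $ grep -A 30 "model name='qemu64'" /usr/share/libvirt/cpu_map.xml

   # redo the compatibility check by hand
   $ cat > cpu.xml <<'EOF'
   <cpu match='exact'>
     <model fallback='allow'>qemu64</model>
   </cpu>
   EOF
   $ virsh cpu-compare cpu.xml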

If I explicitly specify a custom CPU model of "kvm64" then it boots, but of 
course I get a different virtual CPU than the default one.
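
(The difference is easy to see by diffing the flags lines from the two
guests; the hostnames here are just placeholders:)

   $ diff <(ssh guest-default grep '^flags' /proc/cpuinfo) \
          <(ssh guest-kvm64 grep '^flags' /proc/cpuinfo)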

Following some old suggestions, I tried turning off nested kvm, deleting 
/var/cache/libvirt/qemu/capabilities/*, and restarting libvirtd.  Didn't help.
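
(Concretely, what I ran was roughly the following; the nested toggle assumes
the kvm_intel module, and the libvirt service on this Ubuntu is libvirt-bin:)

   $ sudo modprobe -r kvm_intel
   $ sudo modprobe kvm_intel nested=0
   $ sudo rm /var/cache/libvirt/qemu/capabilities/*
   $ sudo service libvirt-bin restart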

So...anyone got any ideas what's going on?  Is there no way to explicitly 
specify the model that you get by default?


Thanks,
Chris



