[libvirt] CPU type/flags when converting a physical machine to run on libvirt

Eduardo Habkost ehabkost at redhat.com
Fri May 2 20:17:42 UTC 2014


On Fri, May 02, 2014 at 08:10:47PM +0100, Richard W.M. Jones wrote:
> On Fri, May 02, 2014 at 03:53:31PM -0300, Eduardo Habkost wrote:
> > On Fri, May 02, 2014 at 01:07:05PM +0100, Richard W.M. Jones wrote:
> > >  - Should we try to reflect the CPU type of the physical machine in
> > >    the virtual machine?  eg. If it's an Opteron, we generate an
> > >    Opteron target machine.  (I believe the answer is *no*, because
> > >    this is not live migration, and most guests can boot on any
> > >    compatible CPU).
> > 
> > I see no reason to _not_ choose Opteron_Gx, if you know the host CPU is
> > always going to be an Opteron_Gx.
> 
> Thanks Eduardo.  I think I should clarify the use cases based on
> what you said here and below.
> 
> It's almost never (probably *never*) the case that the converted guest
> would run on the same host as it originated from.  The old and current
> virt-p2v programs would not let you do that (except with a lot of
> manual intervention).  And no RHEL customer who uses virt-p2v is
> interested in that scenario anyway.  They always want to migrate a
> physical machine to (eg) a pre-existing RHEV cluster and then recycle
> the original physical machine for something else.
> 
> So I guess we never know the target CPU.  What we do know in great
> detail is the current CPU that virt-p2v runs on (ie. the source CPU).

That's interesting. So you have many reasons to be conservative by
default, unless you (or the user) have additional information about the
target machine/cluster where the VM is going to run.

Is the p2v tool going to accept options similar to the virt-install
options? Except for the additional conversion steps, the process looks
very similar to the creation of a new VM: only the user (or management
software using the tool/library) has enough information to decide what
the new VM should really look like.
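
For what it's worth, "conservative by default" can be as simple as not
emitting any <cpu> element at all (QEMU then falls back to its generic
default model, qemu64 on x86), or naming a safe model explicitly. A
minimal sketch of the explicit variant (the model name is just an
example):

  <cpu mode='custom' match='exact'>
    <model fallback='allow'>qemu64</model>
  </cpu>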

> 
> > If the user simply plans to convert an existing physical machine to a
> > single-VM machine and has no plans to ever migrate the VM, it makes
> > sense to use "-cpu host" (but beware: this may uncover a few QEMU bugs).
> > 
> > If the user plans to migrate the VM to a _similar_ host later, it makes
> > sense to use an existing CPU model name that matches the host CPU (see
> > "host-cpu-model" below).
> > 
> > If the user plans to migrate the VM to a very different host later it
> > makes sense to be more conservative and simply use the default CPU
> > model.
> > 
> > In other words: I don't know what's a good default because I don't know
> > your use case very well.
> > 
> > > 
> > >  - How can I ask libvirt to give me the best possible CPU, and not
> > >    some baseline?  Normally I use host-model, but I think that
> > >    prevents migration.
> > 
> > The best possible CPU is "-cpu host" (host-passthrough in libvirt), but
> > that doesn't allow migration. This may uncover QEMU bugs (but it is much
> > better today than it was 1 or 2 years ago).
> > 
> > The best possible CPU which allows migration is the one you get when
> > explicitly asking libvirt to expand host-model (including baseline +
> > flags). This is likely to uncover QEMU and libvirt bugs.
> > 
> > A safer option is to use only the base CPU model (not the additional
> > flags) provided by libvirt when asking about the host CPU model (I
> > believe this is called "host-cpu-model" on virt-manager code).
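
(To make those three options concrete in domain XML terms, they map
roughly to the following; the model name in the last one is just an
example, and you should double-check the exact syntax against the
libvirt documentation for your version:

  <!-- "-cpu host": best CPU, not migratable -->
  <cpu mode='host-passthrough'/>

  <!-- host-model: expanded by libvirt into baseline + extra flags -->
  <cpu mode='host-model'/>

  <!-- safest: base model only, no additional feature flags -->
  <cpu mode='custom' match='exact'>
    <model>Opteron_G4</model>
  </cpu>
)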
> 
> I think the issue I have is: If I get a baseline CPU, will it have
> features like SSE4?  Really I want to be non-specific about the target
> CPU, in the libvirt XML.  I don't want to exclude the target from
> having the best possible CPU features, but also I would like migration
> to work.

It should, and if you are not getting such features, that's an extra
reason to worry about the safety of using "baseline+features" instead
of just "baseline", because it means something unexpected happened. It
is not a bug, strictly speaking, but it is something we want to avoid.

If you are getting "-cpu NotSoNiceCPU,+NiceFeature1,+NiceFeature2" as
the result from libvirt instead of "-cpu NiceCPU" (which would already
contain NiceFeature1 and NiceFeature2), please bug us (libvirt and QEMU
developers) so we can address it.
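
As a concrete illustration of the kind of expansion I mean: getting

  <cpu mode='custom' match='exact'>
    <model>Nehalem</model>
    <feature policy='require' name='aes'/>
  </cpu>

on a host where a plain

  <model>Westmere</model>

would already cover it is exactly the "NotSoNiceCPU,+NiceFeature" case
worth reporting. (The names here are only an illustrative pair; Westmere
is roughly Nehalem plus AES.)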

> 
> > >  - What CPU flags should be reflected in the target libvirt XML?
> > > 
> > >  - Is it worth modelling CPU thread layout?  (I suspect this will be a
> > >    lot of work with the potential to break things rather than provide
> > >    any measurable benefits.)
> > 
> > I wouldn't recommend this unless: 1) you know the VM will be kept
> > running in the same host or on a similar host; 2) you pin the
> > VCPUs/threads to corresponding host CPUs/threads.
> > 
> > This may also uncover QEMU bugs, so I wouldn't do this by default unless
> > the user explicitly asks for it.
> 
> OK.
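
For reference, copying a 1-socket/2-core/2-thread layout and pinning it
would look roughly like this in the domain XML (a sketch; the host CPU
numbers are made up and only meaningful on a host with a matching
topology):

  <vcpu cpuset='0-3'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
  </cputune>
  <cpu>
    <topology sockets='1' cores='2' threads='2'/>
  </cpu>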
> 
> > >  - Is there anything else I haven't thought about?
> > 
> > In the future you may want to support multi-node NUMA VMs. This is
> > similar to the multi-core/multi-thread case: it makes sense if you know
> > the VM is going to run on a host with similar topology, and if you
> > manually pin the guest nodes to the host nodes (something which is not
> > possible yet, but should be possible in the near future).
> 
> OK.  I guess we can record the original NUMA topology.

In both cases above (cores/sockets and NUMA), it doesn't make much sense
to try to copy the original topology unless you want to fine-tune VCPU
and memory pinning to maximize performance. So, considering you don't
know much about the new host machine, I don't see a reason to copy the
original machine topology by default.
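
If the user does ask for it, the guest-visible NUMA topology is recorded
with <numa> cells inside the <cpu> element. A sketch (cell sizes and CPU
ranges are made up; memory is in KiB):

  <cpu>
    <numa>
      <cell cpus='0-3' memory='4194304'/>
      <cell cpus='4-7' memory='4194304'/>
    </numa>
  </cpu>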

-- 
Eduardo



