[libvirt-users] Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)

Daniel P. Berrangé berrange at redhat.com
Thu Feb 8 11:40:48 UTC 2018


On Thu, Feb 08, 2018 at 12:34:24PM +0100, Kashyap Chamarthy wrote:
> On Thu, Feb 08, 2018 at 11:46:24AM +0100, Kashyap Chamarthy wrote:
> > On Wed, Feb 07, 2018 at 11:26:14PM +0100, David Hildenbrand wrote:
> > > On 07.02.2018 16:31, Kashyap Chamarthy wrote:
> > 
> > [...]
> > 
> > > Sounds like a similar problem as in
> > > https://bugzilla.kernel.org/show_bug.cgi?id=198621
> > > 
> > > In short: there is no (live) migration support for nested VMX yet. So as
> > > soon as your guest is using VMX itself ("nVMX"), this is not expected to
> > > work.
> > 
> > Actually, live migration with nVMX _does_ work, provided you have
> > _identical_ CPUs on both source and destination (i.e. use QEMU's
> > '-cpu host' for the L1 guests).  At least that's been the case in my
> > experience.  FWIW, I frequently use that setup in my test environments.
> 
> Correcting my erroneous statement above: For live migration to work in a
> nested KVM setup, it is _not_ mandatory to use "-cpu host".

Yes, assuming both L1 guests are given the same CPU model, then you
can use any CPU model at all for the L2 guests and still be migration-safe,
since your L1 guests provide homogeneous hardware to host L2, regardless
of whether the L0 hosts are homogeneous.
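To illustrate the point above, one way to give both L1 guests the same CPU
model is to pin them to an identical named model in their libvirt domain XML
(a hypothetical sketch; the model name "Haswell-noTSX" is just an example,
and the 'vmx' feature is required so the L1 guest can itself run KVM):

```xml
<!-- L1 guest domain XML fragment: both L1 guests use the same named
     CPU model, so the L2 guests see identical virtual hardware even
     if the underlying L0 hosts differ. -->
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>Haswell-noTSX</model>
  <!-- expose VMX so the L1 guest can run nested (L2) VMs -->
  <feature policy='require' name='vmx'/>
</cpu>
```

With both L1 guests configured this way, the L2 guests can use any CPU
model that fits within that baseline and remain migratable between them.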


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
