[libvirt-users] Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)

Daniel P. Berrangé berrange at redhat.com
Thu Feb 8 14:59:33 UTC 2018


On Thu, Feb 08, 2018 at 02:47:26PM +0100, David Hildenbrand wrote:
> > Sure, I do understand that Red Hat (or any other vendor) is taking no
> > support responsibility for this. At this point I'd just like to
> > contribute to a better understanding of what's expected to definitely
> > _not_ work, so that people don't bloody their noses on that. :)
> 
> Indeed. Nesting is nice to enable, as it works in 99% of all cases.
> It just doesn't work when trying to migrate a nested hypervisor (on x86).

Hmm, if migration of the L1 is going to cause things to crash and
burn, then ideally libvirt on L0 would block the migration from
happening at all.

Naively we could do that whenever the guest has the vmx or svm
features in its CPU definition, but that's probably far too
conservative, as many guests with those features won't actually run
any nested VMs.  It would also be desirable to still be able to
migrate the L1 if no L2s are currently running.
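As a rough sketch of the naive check described above (this is an
illustration, not an actual libvirt patch): grep the domain XML for a
vmx/svm CPU feature. The sample XML and the domain name "l1-guest" are
hypothetical; in practice the input would come from "virsh dumpxml".

```shell
#!/bin/sh
# Hypothetical sample of a domain XML fragment, standing in for the
# output of: virsh dumpxml l1-guest
cat <<'EOF' > /tmp/l1-guest.xml
<domain type='kvm'>
  <cpu mode='custom'>
    <model>Haswell-noTSX</model>
    <feature policy='require' name='vmx'/>
  </cpu>
</domain>
EOF

# The naive (over-conservative) test: any vmx (Intel) or svm (AMD)
# CPU feature means the guest *could* be a nested hypervisor.
if grep -qE "<feature[^>]*name='(vmx|svm)'" /tmp/l1-guest.xml; then
  echo "guest CPU exposes vmx/svm: blocking migration to be safe"
fi
```

As noted, this flags every nested-capable guest, including ones that
never start an L2, which is why some signal about *active* L2s would
be preferable.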

Is there any way QEMU can expose to libvirt whether any L2s are
active, so we can prevent migration in that case?  Or should QEMU
itself refuse to start the migration, perhaps?


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



