[PATCH] docs: Describe protected virtualization guest setup

Christian Borntraeger borntraeger at de.ibm.com
Wed Apr 29 14:09:51 UTC 2020

On 29.04.20 15:25, Daniel P. Berrangé wrote:
> On Wed, Apr 29, 2020 at 10:19:20AM -0300, Daniel Henrique Barboza wrote:
>> On 4/28/20 12:58 PM, Boris Fiuczynski wrote:
>>> From: Viktor Mihajlovski <mihajlov at linux.ibm.com>
>> [...]
>>> +
>>> +If the check fails despite the host system actually supporting
>>> +protected virtualization guests, this can be caused by a stale
>>> +libvirt capabilities cache. To recover, run the following
>>> +commands
>>> +
>>> +::
>>> +
>>> +   $ systemctl stop libvirtd
>>> +   $ rm /var/cache/libvirt/qemu/capabilities/*.xml
>>> +   $ systemctl start libvirtd
>>> +
>>> +
>> Why isn't Libvirt re-fetching the capabilities after host changes that affect
>> KVM capabilities? I see that we're checking QEMU binary timestamps to detect
>> whether the binary changed, which is sensible, but what about /dev/kvm? Shouldn't
>> we refresh domain capabilities every time following a host reboot?
> Caching of capabilities was done precisely to avoid refreshing on every boot
> because it resulted in slow startup for apps using libvirt after boot.

Caching beyond the lifetime of /dev/kvm seems broken from a kernel perspective.
It is totally valid to load a new version of the kvm module with new capabilities.

I am curious why it took so long: on a normal system we should only need to
refresh _one_ binary, as most binaries are TCG-only.
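
If the manual cleanup from the docs is ever needed, it could presumably
also be narrowed to that one binary. A rough sketch, assuming the cached
XML marks KVM support with a <flag name='kvm'/> entry (markup from
memory, it may differ between libvirt versions):

   $ systemctl stop libvirtd
   # drop only the cache files of KVM-capable binaries
   $ grep -l "<flag name='kvm'/>" /var/cache/libvirt/qemu/capabilities/*.xml | xargs -r rm -f
   $ systemctl start libvirtd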

As Boris said, we are going to provide yet another check (besides the existing
nested virtualization one) to invalidate the cache.

But in general I think invalidating the cache for that one and only binary
that provides KVM after a change of /dev/kvm seems like the proper solution.
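
For illustration only (this is not how libvirtd is wired up internally):
the staleness condition boils down to /dev/kvm having a newer change time
than the cached probe results, which can be eyeballed from the shell:

   $ stat -c '%z  %n' /dev/kvm
   $ stat -c '%y  %n' /var/cache/libvirt/qemu/capabilities/*.xml
   # if /dev/kvm's ctime is newer than the cache file for the KVM-capable
   # binary, the cached capabilities should be treated as stale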

> We look for specific features that change as a way to indicate a refresh
> is needed. If there's a need to delete the capabilities manually, that
> indicates we're missing some feature when deciding whether the cache is
> stale.
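
Agreed. From memory (element names may differ between libvirt versions),
the host-state markers recorded in the cache are roughly the QEMU binary
ctime, the libvirtd ctime and the kernel version, i.e. nothing that
tracks /dev/kvm itself:

   $ grep -E 'ctime|kernelVersion' /var/cache/libvirt/qemu/capabilities/*.xml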
