[libvirt] [PATCH v2 2/2] qemu: Set up EMULATOR thread and cpuset.mems before exec()-ing qemu

Martin Kletzander mkletzan at redhat.com
Tue Apr 16 11:42:04 UTC 2019


On Mon, Apr 15, 2019 at 06:32:32PM +0200, Michal Privoznik wrote:
>On 4/15/19 3:47 PM, Martin Kletzander wrote:
>> On Wed, Apr 10, 2019 at 06:10:44PM +0200, Michal Privoznik wrote:
>>> It's funny how this went unnoticed for such a long time. Long
>>> story short, if a domain is configured with
>>> VIR_DOMAIN_NUMATUNE_MEM_STRICT libvirt doesn't really honour
>>> that. This is because of 7e72ac787848 after which libvirt allowed
>>> qemu to allocate memory just anywhere and only after that it used
>>> some magic involving cpuset.memory_migrate and cpuset.mems to
>>> move the memory to desired NUMA nodes. This was done in order to
>>> work around some KVM bug where KVM would fail if there wasn't a
>>> DMA zone available on the NUMA node. While the workaround may
>>> have stopped libvirt from tickling the KVM bug, it also caused
>>> a bug on libvirt's side: if there is not enough memory on the
>>> configured NUMA node(s), then any attempt to start a domain
>>> should fail. But because of the way we play with guest memory,
>>> domains start just happily.
>>>
>>> The solution is to move the child we've just forked into emulator
>>> cgroup, set up cpuset.mems and exec() qemu only after that.
>>>
>>
>> So you are saying this was a bug in KVM?  Is it fixed now?  I am not
>> against this patch (I hated that I had to do the workaround), but I
>> just want to be sure we won't start hitting it again.
>
>Yes, that's what I'm saying. Looks like the KVM bug is fixed now because
>with a Fedora 29 on a NUMA machine I can start domains just fine.
>
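For reference, the sequence the patch describes (move the just-forked child
into the emulator cgroup, write cpuset.mems, and only then exec() qemu) can be
sketched in shell.  The helper name and cgroup path below are hypothetical
stand-ins for what libvirt actually does in C with the cgroup v1 cpuset
controller; this is an illustration of the ordering, not libvirt's code:

```shell
# Sketch only: the cgroup path is a hypothetical stand-in for the
# per-domain emulator cgroup libvirt manages (cgroup v1 cpuset controller).
setup_emulator_cgroup() {
    local cg="$1" pid="$2" mems="$3"
    echo "$pid"  > "$cg/tasks"        # move the pre-exec child into the cgroup
    echo "$mems" > "$cg/cpuset.mems"  # restrict allocations to the given nodes
}

# e.g., in the forked child, before exec()-ing qemu:
#   setup_emulator_cgroup \
#       /sys/fs/cgroup/cpuset/machine.slice/.../emulator "$$" 0-1
#   exec qemu-system-x86_64 ...
```

The point of the ordering is that cpuset.mems already constrains the process
when qemu performs its first allocation, so an over-strict request fails at
startup instead of being papered over by migrating pages afterwards.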

What I was saying was that it would be nice to have some proof of this instead
of guesswork.  I, however, acknowledge that this might not be easy, or even
possible (the patch that introduced the need for the initial workaround was
never pinpointed, at least not to my knowledge).  Just make sure that when
checking this, you strictly require all the allocations to be done from a
node not mentioned in the output of:

  cat /proc/zoneinfo | grep DMA

and also that you use multiple vCPUs.  If you can also hotplug an extra vCPU
later on, then the test is thorough enough for me to justify this change [1].
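To pick such a strict test node, the check above can be turned into a small
helper that lists which NUMA nodes actually contain a DMA/DMA32 zone.  The
helper name is mine, and it assumes the usual /proc/zoneinfo line format
("Node N, zone   DMA"):

```shell
# dma_nodes FILE: print the NUMA node numbers whose zoneinfo entries
# include a DMA or DMA32 zone; pin the test domain to a node NOT listed.
dma_nodes() {
    awk '/^Node [0-9]+, zone +DMA/ { gsub(",", "", $2); print $2 }' "$1" \
        | sort -un
}

# dma_nodes /proc/zoneinfo   # on a typical x86 box, only node 0 shows up
```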

Martin

[1] If you feel like looking up (bisecting) the kernel commit that fixed this,
    I'm _not_ standing in your way ;)