Does 'numad' interact with memory_migration via 'numatune'?

Daniel P. Berrangé berrange at redhat.com
Mon Jun 15 12:25:46 UTC 2020


On Thu, Jun 11, 2020 at 01:01:41PM -0300, Daniel Henrique Barboza wrote:
> Hi,
> 
> While investigating a 'virsh numatune' behavior in Power 9 guests I came
> across this doubt and couldn't find a direct answer.
> 
> numad's role, as far as [1] goes, is automatic NUMA affinity only. As far as
> Libvirt and my understanding go, numad is used for placement='auto' setups,
> which aren't even allowed for numatune operations in the first place.

Yes, libvirt's only use of numad is as a one-shot advisory: when we start
the guest, we ask numad to suggest a node to place it on. Thereafter we don't
talk to numad at all.
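
For placement='auto' guests, that query is a single command-line call. A
minimal sketch of the interaction (the exact invocation is a libvirt
implementation detail; the vCPU/memory numbers here are illustrative):

    # ask numad for the best node(s) for a guest with 4 vCPUs / 4096 MB RAM;
    # numad prints a nodeset such as "0" on stdout
    $ numad -w 4:4096
    0

    # the domain XML that triggers this query at guest startup
    <vcpu placement='auto'>4</vcpu>

libvirt then applies the advised pinning itself and never consults numad
again for that guest.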

Numad can also be run in a mode where it proactively re-pins processes to
re-balance NUMA nodes. AFAIK, it should ignore any QEMU processes when doing
this, as changing pinning of QEMU behind libvirt's back is not supported.
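
For reference, that daemon mode looks roughly like this (the interval and
PID are illustrative; see numad(8) for the full option list):

    # run numad as a rebalancing daemon, scanning every 15 seconds
    $ numad -i 15

    # explicitly exclude a given process (e.g. a QEMU PID) from management
    $ numad -x 12345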

> Problem is that I'm not sure if the mere presence of numad running in the
> host might be accelerating the memory migration triggered by numatune,
> regardless of placement settings. My first answer would be no, but several
> examples on the internet show all the RAM in the guest being migrated
> from one NUMA node to the other almost instantly*, and, aside from them
> being done on x86, I wonder whether numad is having any impact on that.

AFAIK, numad merely changes the pinning of processes. It relies on the
kernel to actually move memory regions around if pinning changed the
best NUMA node to allocate from.
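
The kernel mechanism numatune leans on is the cpuset controller: when the
cpuset's memory_migrate flag is set and the allowed nodes change, the kernel
migrates the existing pages. A rough sketch of what happens under the hood
(cgroup v1 paths; the target cgroup directory is whichever one libvirt
created for the guest):

    # what the admin runs
    $ virsh numatune myguest --nodeset 1 --live

    # roughly what libvirt does inside the guest's cpuset cgroup
    $ echo 1 > cpuset.memory_migrate
    $ echo 1 > cpuset.mems    # kernel moves the guest's pages to node 1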

> The reason I'm asking is that I don't have an x86 setup with multiple
> NUMA nodes to compare results, and numad has been broken on sparse NUMA
> setups for some time now ([2] tells the story if you're interested).
> Power 8/9 happens to operate with sparse NUMA setups, so no numad for me.

FWIW, QEMU can emulate NUMA, so you can create yourself a virtual host
with multiple NUMA nodes for the sake of testing:

https://www.berrange.com/posts/2017/02/16/setting-up-a-nested-kvm-guest-for-developing-testing-pci-device-assignment-with-numa/
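
A minimal sketch of a two-node virtual host on the QEMU command line
(memory sizes, CPU counts and the disk image name are arbitrary examples):

    $ qemu-system-x86_64 -accel kvm -m 4G -smp 4 \
        -object memory-backend-ram,id=mem0,size=2G \
        -object memory-backend-ram,id=mem1,size=2G \
        -numa node,nodeid=0,cpus=0-1,memdev=mem0 \
        -numa node,nodeid=1,cpus=2-3,memdev=mem1 \
        -drive file=host.img,format=qcow2

The equivalent libvirt guest config is a <numa> element with two <cell>
children under <cpu>.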

> If someone can confirm my suspicion (i.e. that numad has no influence on the
> NUMA memory migration triggered by numatune) I'd appreciate it.

I believe that is correct.

> DHB
> 
> 
> * or at the very least no one cared to point out that the memory is migrated
> according to the paging demands of the guest, as I see happening in Power
> guests and working as intended according to the kernel cgroup docs.
> 
> 
> 
> [1] https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-tool_reference-numad
> 
> [2] https://bugs.launchpad.net/ubuntu/+source/numad/+bug/1832915
> 

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



