[libvirt] [PATCH] Fix host CPU counting on unusual NUMA topologies

Daniel P. Berrange <berrange@redhat.com>
Wed Nov 24 14:52:32 UTC 2010


On Wed, Nov 24, 2010 at 02:35:37PM +0100, Jiri Denemark wrote:
> The nodeinfo structure includes
> 
>     nodes   : the number of NUMA cells, 1 for uniform memory access
>     sockets : the number of CPU sockets per node
>     cores   : the number of cores per socket
>     threads : the number of threads per core
> 
> which does not work well for NUMA topologies where each node does not
> consist of an integral number of CPU sockets.
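> 
> For reference, the public structure looks roughly like this (abridged
> from include/libvirt/libvirt.h.in; see the header for the
> authoritative definition):
> 
>     struct _virNodeInfo {
>         char model[32];         /* CPU model string */
>         unsigned long memory;   /* memory size in kilobytes */
>         unsigned int cpus;      /* number of active CPUs */
>         unsigned int mhz;       /* expected CPU frequency */
>         unsigned int nodes;     /* number of NUMA cells */
>         unsigned int sockets;   /* CPU sockets per node */
>         unsigned int cores;     /* cores per socket */
>         unsigned int threads;   /* threads per core */
>     };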
> 
> We also have the VIR_NODEINFO_MAXCPUS macro in the public libvirt.h,
> which computes the maximum number of CPUs as
> (nodes * sockets * cores * threads).
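> 
> That is, roughly:
> 
>     #define VIR_NODEINFO_MAXCPUS(nodeinfo) \
>         ((nodeinfo).nodes * (nodeinfo).sockets * \
>          (nodeinfo).cores * (nodeinfo).threads)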
> 
> As a result, we can't just change sockets to report the total number
> of sockets instead of sockets per node. That would probably be the
> easiest fix, since I doubt anyone uses the field directly, but
> because of the macro some apps may be relying on sockets indirectly.
> 
> This patch keeps sockets as the number of CPU sockets per node (and
> fixes the qemu driver to comply with this) on machines where the
> total socket count is evenly divisible by the number of NUMA nodes.
> If it is not, we behave as if there were just one NUMA node
> containing all sockets. Apps interested in NUMA should consult the
> capabilities XML, which is what they probably do anyway.
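> 
> A minimal sketch of that fallback (illustrative only, not the literal
> patch hunk; nodeinfo is a virNodeInfoPtr whose sockets field still
> holds the total socket count at this point):
> 
>     if (nodeinfo->sockets % nodeinfo->nodes == 0) {
>         /* evenly divisible: report sockets per node */
>         nodeinfo->sockets /= nodeinfo->nodes;
>     } else {
>         /* unusual topology: pretend there is a single NUMA cell
>          * holding all sockets, so the VIR_NODEINFO_MAXCPUS product
>          * still matches the real number of CPUs */
>         nodeinfo->nodes = 1;
>     }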
> 
> This way, the only apps that care about NUMA and may break are those
> running on machines with funky NUMA topology. And there is a chance
> libvirt was unable to start any guests on those machines anyway
> (depending on the topology, the total number of CPUs, and the kernel
> version). Nothing changes at all for apps that don't care about NUMA.
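> 
> For illustration, this is the kind of NUMA-oblivious consumer that
> keeps working unchanged (a minimal sketch using the public API):
> 
>     #include <stdio.h>
>     #include <libvirt/libvirt.h>
> 
>     int main(void)
>     {
>         virConnectPtr conn = virConnectOpenReadOnly(NULL);
>         virNodeInfo info;
> 
>         if (conn == NULL)
>             return 1;
>         if (virNodeGetInfo(conn, &info) == 0)
>             printf("max CPUs: %u (nodes=%u sockets=%u cores=%u "
>                    "threads=%u)\n", VIR_NODEINFO_MAXCPUS(info),
>                    info.nodes, info.sockets, info.cores,
>                    info.threads);
>         virConnectClose(conn);
>         return 0;
>     }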
> 
> Notes:
>     * Testing on 4 sockets, 12 cores each, 8 NUMA nodes
>       Xen (RHEL-5) hypervisor with numa=on:
>         - xm info
>             nr_cpus                : 48
>             nr_nodes               : 8
>             sockets_per_node       : 0
>             cores_per_socket       : 12
>             threads_per_core       : 1
>         - virsh nodeinfo
>             CPU(s):              48
>             CPU socket(s):       4
>             Core(s) per socket:  12
>             Thread(s) per core:  1
>             NUMA cell(s):        1
>         - virsh capabilities
>             /capabilities/host/topology/cells@num = 8
>       QEMU driver:
>         - virsh nodeinfo
>             CPU(s):              48
>             CPU socket(s):       4
>             Core(s) per socket:  12
>             Thread(s) per core:  1
>             NUMA cell(s):        1
>         - virsh capabilities
>             /capabilities/host/topology/cells@num = 8
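>     
>       (Sanity check: VIR_NODEINFO_MAXCPUS yields 1 * 4 * 12 * 1 = 48,
>       matching nr_cpus, even though the real topology has 8 cells of
>       half a socket each.)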
>     
>     * 2 sockets, 4 cores each, 2 NUMA nodes
>       Xen (RHEL-5) hypervisor with numa=on:
>         - xm info
>             nr_cpus                : 8
>             nr_nodes               : 2
>             sockets_per_node       : 1
>             cores_per_socket       : 4
>             threads_per_core       : 1
>         - virsh nodeinfo
>             CPU(s):              8
>             CPU socket(s):       1
>             Core(s) per socket:  4
>             Thread(s) per core:  1
>             NUMA cell(s):        2
>         - virsh capabilities
>             /capabilities/host/topology/cells@num = 2
>       QEMU driver:
>         - virsh nodeinfo
>             CPU(s):              8
>             CPU socket(s):       1
>             Core(s) per socket:  4
>             Thread(s) per core:  1
>             NUMA cell(s):        2
>         - virsh capabilities
>             /capabilities/host/topology/cells@num = 2
>     
>     * uniform memory architecture, 2 sockets, 4 cores each
>       Xen (RHEL-5) hypervisor:
>         - xm info
>             nr_cpus                : 8
>             nr_nodes               : 1
>             sockets_per_node       : 2
>             cores_per_socket       : 4
>             threads_per_core       : 1
>         - virsh nodeinfo
>             CPU(s):              8
>             CPU socket(s):       2
>             Core(s) per socket:  4
>             Thread(s) per core:  1
>             NUMA cell(s):        1
>         - virsh capabilities
>             /capabilities/host/topology/cells@num = 1
>       Xen (upstream) hypervisor:
>         - xm info
>             nr_cpus                : 8
>             nr_nodes               : 1
>             cores_per_socket       : 4
>             threads_per_core       : 1
>         - virsh nodeinfo
>             CPU(s):              8
>             CPU socket(s):       2
>             Core(s) per socket:  4
>             Thread(s) per core:  1
>             NUMA cell(s):        1
>         - virsh capabilities
>             /capabilities/host/topology/cells@num = 1
>       QEMU driver:
>         - virsh nodeinfo
>             CPU(s):              8
>             CPU socket(s):       2
>             Core(s) per socket:  4
>             Thread(s) per core:  1
>             NUMA cell(s):        1
>         - virsh capabilities
>             /capabilities/host/topology/cells@num = 1
> ---
>  include/libvirt/libvirt.h.in |    9 ++++++---
>  src/nodeinfo.c               |   10 ++++++++++
>  src/xen/xend_internal.c      |   19 ++++++++++++++-----
>  3 files changed, 30 insertions(+), 8 deletions(-)

ACK

Daniel