[vfio-users] Memory allocation with numatune

Jan Wiele jan at wiele.org
Mon Jan 30 16:59:46 UTC 2017


Hi Blair, 
that's it! Thanks! My QEMU command line now features:
> -object memory-backend-ram,id=ram-node0,size=7516192768,host-nodes=1,policy=bind
> -numa node,nodeid=0,cpus=0-11,memdev=ram-node0
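
As a sanity check that the memory really lands on host node 1, the per-node
usage of the running QEMU process can be inspected. A minimal sketch, assuming
the numactl tools are installed and "win10" stands in for the real domain name:

  numastat -p $(pidof qemu-system-x86_64)   # per-node usage of the QEMU process, in MB
  virsh numatune win10                      # the policy libvirt actually applied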

But I think it is supposed to work with the arguments I originally specified, 
since [1] does not mention a dependency on the <numa/> tag and libvirt does 
not return an error when saving the configuration. Or am I missing something?
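
For context, the standalone block I originally specified was of this form
(reconstructed from memory, so a sketch rather than my exact pastebin):

<numatune>
  <memory mode='strict' nodeset='1'/>
</numatune>

libvirt saves this without complaint even though no matching <numa>/<memnode>
definition exists, which is why I assumed it would be enough on its own.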

I will test this again on libvirt 3.0 once this bugfix [2] has landed in the Arch
repos. If the behavior still exists in v3, I will contact the libvirt
mailing list/IRC.
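
For anyone comparing setups later, the relevant versions are quick to collect
(plain commands, nothing distro-specific; a sketch):

  virsh version   # libvirt library/daemon versions plus the running QEMU
  uname -r        # host kernel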

Cheers,
Jan


[1] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-NUMA-NUMA_and_libvirt.html#sect-Virtualization_Tuning_Optimization_Guide-NUMA-NUMA_and_libvirt-Domain_Processes

[2] https://bugzilla.redhat.com/show_bug.cgi?id=1413773


On Monday, 30 January 2017 at 11:30:01 CET, Blair Bethwaite wrote:
> Hi Jan,
> 
> Looking back at your original pastebin, I see you are not explicitly
> specifying guest NUMA details. Perhaps your numatune needs the memnode
> element; I'm not sure if that is significant here, but you could try:
> 
> <numatune>
>   <memory mode='strict' nodeset='1'/>
>   <memnode cellid='0' mode='strict' nodeset='1'/>
> </numatune>
> 
> and then:
> 
> <cpu mode='host-model'>
>   <model fallback='allow'/>
>   <topology sockets='1' cores='6' threads='2'/>
>   <numa>
>     <cell id='0' cpus='0-11' memory='7340032' unit='KiB'/>
>   </numa>
> </cpu>
> 
> Cheers,
> 
> On 30 January 2017 at 07:02, Jan Wiele <jan at wiele.org> wrote:
> > On 2017-01-29 16:33, Alex Williamson wrote:
> >> On Sat, 28 Jan 2017 12:58:41 +0100, Thomas Lindroth
> >> <thomas.lindroth at gmail.com> wrote:
> >>> On 01/27/2017 04:52 PM, Alex Williamson wrote:
> >>> > vcpupin actually looks a little off, though: just like on the host, the
> >>> > VM is going to enumerate cores then threads
> >>> 
> >>> Are you sure about that? Perhaps there is some NUMA trick I don't
> >>> understand, but it looks like the guest is a regular 6 cores
> >>> with HT (<topology sockets='1' cores='6' threads='2'/>), and
> >>> for me setups like that result in a layout like this in the
> >>> guest (qemu-2.8.0):
> >>> 
> >>> cat /proc/cpuinfo |grep "core id"
> >>> core id         : 0
> >>> core id         : 0
> >>> core id         : 1
> >>> core id         : 1
> >>> core id         : 2
> >>> core id         : 2
> >>> 
> >>> The ordering in the guest is threads then cores, but for the
> >>> host it's the other way around:
> >>> 
> >>> cat /proc/cpuinfo |grep "core id"
> >>> core id         : 0
> >>> core id         : 1
> >>> core id         : 2
> >>> core id         : 3
> >>> core id         : 0
> >>> core id         : 1
> >>> core id         : 2
> >>> core id         : 3
> >>> 
> >>> Cores then threads. So to get a 1:1 match I have to use
> >>> pinning like this, which is similar to that NUMA setup:
> >>> 
> >>> <vcpupin vcpu='0' cpuset='1'/>
> >>> <vcpupin vcpu='1' cpuset='5'/>
> >>> <vcpupin vcpu='2' cpuset='2'/>
> >>> <vcpupin vcpu='3' cpuset='6'/>
> >>> <vcpupin vcpu='4' cpuset='3'/>
> >>> <vcpupin vcpu='5' cpuset='7'/>
> >>> <topology sockets='1' cores='3' threads='2'/>
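> >>>
> >>> (For reference, the host sibling pairs behind those cpusets can be read
> >>> straight from sysfs; a sketch using the standard Linux topology files:
> >>>
> >>> grep . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list
> >>>
> >>> which here prints pairs like 1,5 / 2,6 / 3,7, matching the cpusets
> >>> above.)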
> >> 
> >> Oh wow, maybe I'm completely off.  I hadn't noticed the core-id
> >> ordering in the VM, but indeed on my system the guest shows:
> >> 
> >> core id         : 0
> >> core id         : 0
> >> core id         : 1
> >> core id         : 1
> >> core id         : 2
> >> core id         : 2
> >> core id         : 3
> >> core id         : 3
> >> 
> >> Any physical system I've seen has the ordering you show above,
> >> enumerating cores then threads.  Sorry Jan, maybe the way you had it
> >> originally was optimal.  Thanks,
> >> 
> >> Alex
> > 
> > Ah, good to know! I will change my config back to the original one.
> > I was curious how I could get this information on a Windows system. After
> > some googling I found coreinfo [1], which reports this:
> > 
> > Logical to Physical Processor Map:
> > **----------  Physical Processor 0 (Hyperthreaded)
> > --**--------  Physical Processor 1 (Hyperthreaded)
> > ----**------  Physical Processor 2 (Hyperthreaded)
> > ------**----  Physical Processor 3 (Hyperthreaded)
> > --------**--  Physical Processor 4 (Hyperthreaded)
> > ----------**  Physical Processor 5 (Hyperthreaded)
> > 
> > But I'm still having my memory allocation problem: can someone confirm
> > that the numatune option actually works? I'm interested in the
> > kernel/QEMU/libvirt versions.
> > 
> > Jan
> > 
> > [1] https://technet.microsoft.com/en-us/sysinternals/cc835722.aspx
> > 



