[PATCH v3 00/15] Introduce virtio-mem <memory/> model

David Hildenbrand david at redhat.com
Thu Jun 17 12:03:44 UTC 2021


On 17.06.21 13:18, Michal Prívozník wrote:
> On 6/17/21 11:44 AM, David Hildenbrand wrote:
>> On 17.03.21 12:57, Michal Privoznik wrote:
>>> v3 of:
>>>
>>> https://listman.redhat.com/archives/libvir-list/2021-February/msg00961.html
>>>
>>>
>>> diff to v2:
>>> - Dropped code that forbade use of virtio-mem and memballoon at the same
>>>     time;
>>> - This meant that I had to adjust memory accounting,
>>>     qemuDomainSetMemoryFlags() - see patches 11/15 and 12/15 which are
>>> new.
>>> - Fixed small nits raised by Peter in his review of v2
>>>
>>>
>>
>> Hi Michal, do you have a branch somewhere that I can easily checkout to
>> play with it/test it?
>>
> 
> Yes:
> 
> https://gitlab.com/MichalPrivoznik/libvirt/-/tree/virtio_mem_v4
> 
> There were some comments in Peter's review and I really should fix my
> code according to them and merge/send v4.
> 
> Michal
> 

Thanks! I started with a single virtio-mem device.

1. NUMA requirement

Right now, one can only configure "maxMemory" when NUMA is specified as
well; otherwise startup fails with "error: unsupported configuration: At
least one numa node has to be configured when enabling memory hotplug".

I recall this being a limitation of older QEMU, which would not create ACPI
SRAT tables otherwise. In QEMU, this is no longer the case: as soon as
"maxmem" is specified on the QEMU cmdline, we fake a single NUMA node:

hw/core/numa.c:numa_complete_configuration()

"Enable NUMA implicitly by adding a new NUMA node automatically"
-> m->auto_enable_numa_with_memdev / mc->auto_enable_numa_with_memhp

m->auto_enable_numa_with_memdev (slots=0) is set since 5.1 on x86-64 and arm64
m->auto_enable_numa_with_memhp (slots>0) is set since 2.11 on x86-64 and 4.1 on arm64

So in theory, with newer QEMU on x86-64 and arm64 we could drop that
limitation in libvirt (which might eventually require some changes to the
"node" specification handling). ppc64 shouldn't care, as there is no ACPI.


2. "virsh update-memory-device"

My domain looks something like:

<domain type='kvm' id='3'>
   <name>Fedora 34</name>
...
   <maxMemory slots='16' unit='KiB'>20971520</maxMemory>
   <memory unit='KiB'>20971520</memory>
   <currentMemory unit='KiB'>4194304</currentMemory>
   <vcpu placement='static'>16</vcpu>
...
   <cpu mode='custom' match='exact' check='full'>
     <numa>
       <cell id='0' cpus='0-15' memory='4194304' unit='KiB'/>
     </numa>
   </cpu>
...
   <devices>
...
     <memory model='virtio-mem'>
       <target>
         <size unit='KiB'>16777216</size>
         <node>0</node>
         <block unit='KiB'>2048</block>
         <requested unit='KiB'>524288</requested>
         <actual unit='KiB'>0</actual>
       </target>
       <alias name='virtiomem0'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
     </memory>

   </devices>
...


It starts just fine and I can query the current properties

# virsh dumpxml "Fedora 34"
...
     <memory model='virtio-mem'>
       <target>
         <size unit='KiB'>16777216</size>
         <node>0</node>
         <block unit='KiB'>2048</block>
         <requested unit='KiB'>524288</requested>
         <actual unit='KiB'>524288</actual>
       </target>
       <alias name='virtiomem0'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
     </memory>

Trying to update the requested size, however, results in:

[root at virtlab704 ~]# virsh update-memory-device "Fedora 34" --requested-size 0
error: unsupported configuration: Attaching memory device with size '16777216' would exceed domain's maxMemory config size '20971520'

I tried "--alias" as well, with the same result.

Is it maybe an issue regarding handling of "maxMemory" vs "memory" ?
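
The arithmetic the error message hints at would be (numbers taken from
the domain XML above; the variable names are mine, not libvirt's):

```python
# Sketch of the suspected accounting bug behind the "would exceed
# maxMemory" error when updating the requested size in place.

GiB = 1024 * 1024  # KiB per GiB, matching the XML's unit='KiB'

max_memory   = 20 * GiB   # <maxMemory>20971520</maxMemory>
memory_total = 20 * GiB   # <memory>20971520</memory>, already includes the device
device_size  = 16 * GiB   # virtio-mem <size>16777216</size>

# If the update path accounts the device as if it were freshly attached,
# its size is added on top of a total that already contains it:
print(memory_total + device_size > max_memory)  # True -> error triggers

# An in-place update of <requested> changes neither <size> nor the total:
print(memory_total <= max_memory)  # True -> should be accepted
```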


3. "memory" item handling

Whenever I edit the XML and set, e.g., "<memory unit='GiB'>4</memory>", it is
silently converted back to 20 GiB. Maybe that value is just always implicitly
calculated from the NUMA spec plus the defined memory devices.
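
A quick check with the numbers from the domain XML above supports that
guess (the calculation is mine, an assumption about what libvirt does):

```python
# <memory> appears to be the sum of the NUMA cell sizes and the memory
# device sizes, which would explain the silent rewrite to 20 GiB.

GiB = 1024 * 1024  # KiB per GiB

numa_cells     = [4 * GiB]    # <cell id='0' ... memory='4194304'/>
memory_devices = [16 * GiB]   # virtio-mem <size unit='KiB'>16777216</size>

total = sum(numa_cells) + sum(memory_devices)
print(total)  # 20971520 KiB == 20 GiB, the value libvirt writes back
```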


Thanks!

-- 
Thanks,

David / dhildenb



