[vfio-users] [QEMU+OVMF+VFIO] Memory Prealloc
Bryan Angelo
bangelo at gmail.com
Tue Nov 22 22:35:56 UTC 2022
Related follow-up.

When I add memory to a running VM via hotplug, QEMU preallocates that
memory too (as expected, given your explanation). When I subsequently
remove the hotplugged memory, QEMU does not always appear to free the
underlying allocation.
For example, starting with:

  -m 8G,slots=1,maxmem=12G

QEMU is using 8G and the VM shows 8G total. After hotplugging a 4G DIMM
via the monitor:

  object_add memory-backend-ram,id=mem1,size=4G
  device_add pc-dimm,id=dimm1,memdev=mem1

QEMU is using 12G and the VM shows 12G total. After using the VM for a
bit, then removing the DIMM:

  device_del dimm1
  object_del mem1

QEMU is still using 12G, while the VM shows 8G total.
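One way to watch this from the host is to sample the QEMU process's
resident set across the unplug (a minimal sketch, assuming Linux and a
single qemu-system-x86_64 process):

  # Resident set size of the QEMU process, sampled every second;
  # in the run above it stays near 12G even after device_del/object_del
  watch -n1 'grep VmRSS /proc/$(pidof qemu-system-x86_64)/status'
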
Does it just so happen that the VFIO device is using memory that QEMU
allocated/pinned for the hotplug device and therefore QEMU cannot free it?
Or is there something else going on here?
Thanks.
On Sun, Nov 20, 2022, 16:24 Bryan Angelo <bangelo at gmail.com> wrote:
> Thanks for the clear explanation and detail.
>
> On Sun, Nov 20, 2022, 17:54 Alex Williamson <alex.williamson at redhat.com>
> wrote:
>
>> On Sun, 20 Nov 2022 16:36:58 -0800
>> Bryan Angelo <bangelo at gmail.com> wrote:
>>
>> > When passing through via vfio-pci using QEMU 7.1.0 and OVMF, it appears
>> > that QEMU preallocates all guest system memory.
>> >
>> > qemu-system-x86_64 \
>> > -no-user-config \
>> > -nodefaults \
>> > -nographic \
>> > -rtc base=utc \
>> > -boot strict=on \
>> > -machine pc,accel=kvm,dump-guest-core=off \
>> > -cpu host,migratable=off \
>> > -smp 8 \
>> > -m size=8G \
>> > -overcommit mem-lock=off \
>> > -device vfio-pci,host=03:00.0 \
>> > ...
>> >
>> >  PID  USER  PR  NI     VIRT        RES   %CPU  %MEM    TIME+ S COMMAND
>> > 4151  root  20   0 13560.8m  *8310.8m*  100.0  52.6  0:25.06 S qemu-system-x86_64
>> >
>> >
>> > If I remove just the vfio-pci device argument, it appears that QEMU no
>> > longer preallocates all guest system memory.
>> >
>> >  PID  USER  PR  NI     VIRT       RES  %CPU  %MEM    TIME+ S COMMAND
>> > 5049  root  20   0 13414.0m  *762.4m*   0.0   4.8  0:27.06 S qemu-system-x86_64
>> >
>> >
>> > I am curious if anyone has any context on or experience with this
>> > functionality. Does anyone know if preallocation is a requirement for
>> VFIO
>> > with QEMU or if preallocation can be disabled?
>> >
>> > I am speculating that QEMU is actually preallocating as opposed to the
>> > guest touching every page of system memory.
>>
>>
>> This is a necessary artifact of device assignment currently. Any memory
>> that can potentially be a DMA target for the assigned device needs to be
>> pinned in the host. By default, all guest memory is potentially a DMA
>> target, therefore all of guest memory is pinned. A vIOMMU in the guest
>> can reduce the memory footprint, although the guest will still initially
>> pin all memory, as the vIOMMU is disabled at guest boot/reboot. It also
>> trades VM memory footprint for latency, as dynamic mappings through a
>> vIOMMU to the host IOMMU are a long path.
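>>
>> As a rough sketch of such a configuration (assumptions: an Intel host;
>> the intel-iommu device requires the q35 machine type rather than pc, a
>> split irqchip for interrupt remapping, and caching-mode=on for VFIO):
>>
>>   -machine q35,accel=kvm,kernel-irqchip=split \
>>   -device intel-iommu,intremap=on,caching-mode=on \
>>   -device vfio-pci,host=03:00.0 \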
>>
>> Eventually, devices supporting Page Request Interface (PRI) capabilities
>> can help alleviate this, by essentially faulting DMA pages, much like
>> the processor does for memory. Support for this likely requires new
>> hardware and software though.
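>>
>> Whether a given device advertises a PRI capability can be checked from
>> the host (a small sketch, reusing the device address from the command
>> line above):
>>
>>   # PRI shows up in lspci output as "Page Request Interface (PRI)"
>>   sudo lspci -vvv -s 03:00.0 | grep -i 'page request'
>>
>> Thanks,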
>>
>> Alex
>>
>>