[vfio-users] Dynamically allocating hugepages?

Thomas Lindroth thomas.lindroth at gmail.com
Wed Jul 6 21:48:09 UTC 2016


Hugetlbfs requires huge pages to be reserved before files can be put on
it. The easiest way of doing that is adding a hugepages= argument to the
kernel command line, but that permanently reserves the pages and I don't
want to waste 8G of RAM all the time. Another way is to allocate them
dynamically by echoing 4096 into /proc/sys/vm/nr_hugepages, but if the
computer has been running for more than an hour I'll be lucky to get 20
pages that way.
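
For reference, the two approaches look roughly like this (4096 pages of
2M each for 8G; the numbers are just my setup):

# static reservation, on the kernel command line:
#   hugepages=4096

# dynamic allocation at runtime:
echo 4096 > /proc/sys/vm/nr_hugepages

# check how many pages the kernel actually managed to reserve:
grep HugePages_Total /proc/meminfo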

The physical RAM is too fragmented for dynamic allocation of huge pages,
but there is a workaround. The quickest way to defragment a hard drive
is to delete all files, and the fastest way to defragment RAM is to drop
caches. By running echo 3 > /proc/sys/vm/drop_caches before echo 4096 >
/proc/sys/vm/nr_hugepages the allocation is much more likely to succeed,
but it is not guaranteed; application memory could still be too
fragmented. For that I would echo 1 > /proc/sys/vm/compact_memory, which
should compact all free space into contiguous areas. I've never tried to
compact memory because cache dropping is usually enough when using 2M
huge pages.
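
Put together, the whole workaround would look something like this sketch
(the compact_memory step is the part I've never actually tried):

#!/bin/sh
want=4096

sync                                    # flush dirty pages so the caches can be dropped
echo 3 > /proc/sys/vm/drop_caches       # drop page cache, dentries and inodes
echo 1 > /proc/sys/vm/compact_memory    # ask the kernel to compact free memory (untested here)
echo $want > /proc/sys/vm/nr_hugepages  # try to reserve the pages

got=$(cat /proc/sys/vm/nr_hugepages)    # reading it back shows what the kernel really got
[ "$got" -lt "$want" ] && echo "only got $got of $want huge pages" >&2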

Is there no better way of doing this? The kernel could selectively drop
cache pages to make huge pages without dropping all caches, and if that
is not enough it could compact only the memory needed. I've looked for
an option like that but I haven't found anything. The closest thing
I've seen is echo "always" > /sys/kernel/mm/transparent_hugepage/defrag,
but that only affects transparent huge pages.
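
For what it's worth, that knob is easy enough to poke at, it just
doesn't apply to hugetlbfs reservations:

cat /sys/kernel/mm/transparent_hugepage/defrag    # active setting is shown in brackets
echo always > /sys/kernel/mm/transparent_hugepage/defrag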
