[vfio-users] Dynamically allocating hugepages?

Ivan Volosyuk ivan.volosyuk at gmail.com
Mon Jul 11 01:29:31 UTC 2016


Given that I have /var/tmp mounted as tmpfs, I use the following shell
script to allocate enough pages. I just introduce memory pressure and let
the kernel do the rest:
MEM=6144 # guest memory size in MiB
# hugepage size in MiB (reported in kB in /proc/meminfo)
HUGEPAGESIZE_MB=$(($(grep Hugepagesize /proc/meminfo | awk '{print $2}') / 1024))
HUGEPAGES_NR=$(($MEM / $HUGEPAGESIZE_MB))

echo $HUGEPAGES_NR > /proc/sys/vm/nr_hugepages

for i in $(seq 10)
do
  echo $HUGEPAGES_NR > /proc/sys/vm/nr_hugepages
  nr=$(cat /proc/sys/vm/nr_hugepages)
  echo $nr
  if [ $nr -eq $HUGEPAGES_NR ]
  then
    break
  fi
  sleep 1
  # fill tmpfs to push page cache out of RAM, then free it again;
  # the freed memory is what the next attempt can turn into huge pages
  dd if=/dev/zero of=/var/tmp/mem count=$HUGEPAGES_NR bs=1048576
  rm /var/tmp/mem
done


On Sun, Jul 10, 2016 at 4:03 PM Jesse Kennedy <freebullets0 at gmail.com>
wrote:

> I'm also interested in this. Running your 3 commands only gave me 3650
> hugepages on a 32 GB system. I wonder if there is a way to have qemu use
> the available hugepages first and then fall back to normal memory once
> they are depleted.
>
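As far as I know there is no supported way to have QEMU transparently spill
over into normal memory once the hugepage pool runs out. For reference, a
rough sketch of how the pool is usually handed to QEMU (the mount point and
the 6144M size are only examples, adjust for your setup); with -mem-prealloc
all guest RAM is allocated up front, so a too-small pool fails at startup
instead of surfacing later:

mount -t hugetlbfs none /dev/hugepages    # if not already mounted
qemu-system-x86_64 -enable-kvm -m 6144 \
    -mem-path /dev/hugepages \
    -mem-prealloc \
    [... rest of the usual VM options ...]
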
> On Wed, Jul 6, 2016 at 2:48 PM, Thomas Lindroth <thomas.lindroth at gmail.com
> > wrote:
>
>> Hugetlbfs requires huge pages to be reserved before files can be put on
>> it. The easiest way of doing that is adding a hugepages= argument to the
>> kernel command line, but that permanently reserves the pages and I don't
>> want to waste 8G of RAM all the time. Another way is to allocate them
>> dynamically by echoing 4096 into /proc/sys/vm/nr_hugepages, but if the
>> computer has been running for more than an hour I'll be lucky to get 20
>> pages that way.
>>
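Spelled out, the two reservation methods above look roughly like this
(4096 x 2M pages = 8G, the figures used in this thread):

# boot time: permanent reservation via the kernel command line, e.g.
#   hugepages=4096
# runtime: request the pages, then check how many the kernel actually found
echo 4096 > /proc/sys/vm/nr_hugepages
grep HugePages_Total /proc/meminfo
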
>> The physical RAM is too fragmented for dynamic allocation of huge pages,
>> but there is a workaround. The quickest way to defragment a hard drive
>> is to delete all files, and the fastest way to defragment RAM is to drop
>> caches. By running echo 3 > /proc/sys/vm/drop_caches before echo 4096 >
>> /proc/sys/vm/nr_hugepages the allocation is much more likely to succeed,
>> but it's not guaranteed; application memory could still be too fragmented.
>> For that I would echo 1 > /proc/sys/vm/compact_memory, which should
>> compact all free space into contiguous areas. I've never needed to
>> compact memory because dropping caches is usually enough when using 2M
>> huge pages.
>>
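Written out, the drop-then-allocate sequence described above is roughly
(run as root; sync first so dirty pages can actually be dropped):

sync
echo 3 > /proc/sys/vm/drop_caches       # drop page cache, dentries and inodes
echo 1 > /proc/sys/vm/compact_memory    # compact free memory into contiguous blocks
echo 4096 > /proc/sys/vm/nr_hugepages   # then retry the allocation
grep HugePages /proc/meminfo            # see how many pages were actually reserved
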
>> Is there no better way of doing this? The kernel could selectively drop
>> cache pages to make huge pages without dropping all caches, and if that
>> is not enough it could compact only the memory needed. I've looked for
>> an option like that but I haven't found anything. The closest thing
>> I've seen is echo "always" > /sys/kernel/mm/transparent_hugepage/defrag,
>> but that only applies to transparent huge pages.
>>