[libvirt-users] locking domain memory

Ivan Borodin magwa.man at yandex.ru
Sat Nov 26 18:42:20 UTC 2016



On 11/22/2016 05:56 AM, Dennis Jacobfeuerborn wrote:
> On 21.11.2016 17:05, Michal Privoznik wrote:
>> On 18.11.2016 23:17, Dennis Jacobfeuerborn wrote:
>>> Hi,
> >>> is there a way to lock a guest's memory so it doesn't get swapped out? I
> >>> know there is memoryBacking->locked, but that says it requires
> >>> memtune->hard_limit, and the description of that basically says "don't
> >>> ever do this", rendering the locked element kind of pointless.
> >>> How can I prevent the guest's memory from being swapped out without
> >>> shooting myself in the foot?
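For reference, the two elements in question look roughly like this in the domain XML (a sketch; the hard_limit value is purely illustrative, not a recommendation):

```xml
<domain type='kvm'>
  ...
  <!-- Ask the hypervisor to mlock() the guest's memory pages. -->
  <memoryBacking>
    <locked/>
  </memoryBacking>
  <!-- Cap on total memory the qemu process may consume; required
       alongside <locked/>. The value here is only an example. -->
  <memtune>
    <hard_limit unit='KiB'>9437184</hard_limit>
  </memtune>
  ...
</domain>
```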
>>
> >> There is no simple answer to this question. You have to know your
> >> machines in order to know what to expect. Firstly, it doesn't make much
> >> sense to lock just the guest's memory; you need to lock the hypervisor's
> >> memory too. However, if the hypervisor is capable of ballooning the
> >> guest's memory on the fly (or there is a memory leak in the hypervisor),
> >> you want to put some limit on how much can actually be locked. But once
> >> you put the limit in place, the kernel starts killing processes when it
> >> is reached. Then, after you've done some observation and seen that your
> >> qemu never takes more than X bytes, take into account operations that
> >> are short-lived - qemu allocates some memory on hotplug, maybe on some
> >> excessive monitor communication too. Who knows.
> >> Anyway, you've taken the limit X and added, say, 10% on top of it -
> >> just to be sure, right? And then you upgrade. The new binary of course
> >> has some parts rewritten and thus allocates different amounts of
> >> memory. You see where I am going with this?
>>
> >> Long story short: the problem of determining the amount of memory a
> >> process needs to run can be reduced to the halting problem. QED.
> 
> I'm not sure why any of this wouldn't also apply if you don't lock the
> memory, though. You may have a bit more leeway because memory pages can
> be swapped out, but if you have a memory leak you will eventually run
> out of swap space as well and run into the same problem.
> 
> The reason I'm interested in this is that I recently saw a MariaDB VM
> get into trouble because the host decided to move memory pages to swap
> even though tens of gigabytes of RAM were available. Some time later
> these pages had to be swapped back in, which caused the guest to stall
> for a few seconds, which caused queries to pile up, which left the guest
> with a load of >200 for several minutes. This happened again a day
> later. The solution I then implemented was to disable swap completely on
> the host. This "fixed" the problem, since now the host couldn't swap out
> memory even if it wanted to.
> 
> My problem with this solution is that it is rather ham-fisted: I don't
> really want to disable swap for the entire system, I only want to
> prevent this one qemu process from being able to use swap.
> 
> Also keep in mind that we are talking about a guest that doesn't use
> ballooning and a host that doesn't overcommit memory (otherwise
> disabling swap would obviously have severe consequences).
> 
> I'm still not sure why hard_limit is required. Yes, if there is a leak
> the process will keep growing and eventually the host will die, but as
> I mentioned above this is also the case without locked memory; it will
> just happen a little later.
> 
Excuse me if it's irrelevant, but you can use cgroups to limit or
disable swap usage per process.
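With the cgroup v2 unified hierarchy, for example, that could look something like this (a sketch; the cgroup name "novswap" is made up, the commands need root, and the path assumes cgroup v2 is mounted at /sys/fs/cgroup):

```shell
# Create a dedicated cgroup for the VM.
mkdir /sys/fs/cgroup/novswap

# Forbid any swap usage for processes in this cgroup.
echo 0 > /sys/fs/cgroup/novswap/memory.swap.max

# Move the qemu process into the cgroup ($QEMU_PID stands for
# the VM's actual process ID).
echo "$QEMU_PID" > /sys/fs/cgroup/novswap/cgroup.procs
```

On the legacy (v1) memory controller the rough equivalent is to set memory.memsw.limit_in_bytes equal to memory.limit_in_bytes, so that memory plus swap can never exceed the RAM limit alone.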



