[libvirt] SetMaxMemory vs. SetMemory

Daniel P. Berrange berrange at redhat.com
Fri May 22 09:41:29 UTC 2009


On Thu, May 21, 2009 at 08:37:18PM +0200, Chris Lalancette wrote:
> Matthias Bolte wrote:
> > Hello,
> > 
> > I just took a look at the driver functions for SetMaxMemory and
> > SetMemory, as they are not implemented yet for the ESX driver and
> > Daniel Veillard was a bit surprised that they are missing, as he
> > expects them to be simple to implement. The problem is that I'm not
> > sure how the memory model of ESX maps to SetMaxMemory vs. SetMemory.
> > 
> > An ESX virtual machine has a defined memory size. That's the size
> > reported to the guest OS as the available "physical" memory size.
> > Besides this, ESX allows the user to control how the hypervisor
> > satisfies this "physical" memory size. You can define a reservation
> > and an upper limit. The hypervisor will at least use the reserved
> > amount of real physical memory to satisfy the "physical" memory size,
> > but will not use more than the upper limit.
> > 
> > How does this map to SetMaxMemory and SetMemory? My first assumption
> > was, SetMaxMemory defines the "physical" memory size and SetMemory
> > defines the upper limit of real physical memory to satisfy the
> > "physical" memory size. This assumption seems to be in sync with the
> > QEMU driver from just looking at the code. But with Xen it seems to be
> > different. If I call SetMaxMemory and SetMemory with 2GB, then free
> > inside the domain reports 2GB total memory. After I call SetMemory
> > with 1GB, free reports 1GB of total memory.
> > 
> > I'm confused. So, what is the intended semantic for SetMaxMemory and SetMemory?
> 
> Well, this is because of a peculiarity with Xen PV domains.  In Xen PV guests,
> you specify a "maxmem" and a "memory" parameter in the configuration file.  The
> "maxmem" parameter is presented to the guest as the end of the e820 map, hence
> the end of real memory as far as the guest is concerned (you can see that in the
> output of dmesg from the guest).  When the balloon driver in the guest loads, it
> will "allocate" (maxmem - memory), so that free inside the guest looks like it
> only has 1GB.  Later on, you can balloon back up, which means that the balloon
> driver "releases" memory back to the domain (but never above the maxmem
> parameter, since that's what's in the e820 map for the guest).
> 
> I would say for the ESX driver, you probably want to follow the QEMU model;
> it's the model that KVM and even Xen FV guests follow, so it seems to be more
> common.

Actually QEMU, KVM, Xen PV and Xen FV all follow the same model, provided
you have the balloon driver available in the guest. 'maxmem', confusingly
called <memory> in the XML, sets the maximum possible memory for the
guest, as exposed in the e820 map. While the guest is running, this
maximum can be reduced by setting 'memory', confusingly called
<currentMemory> in the XML, to a lower value. The host talks to the
balloon driver in the guest and asks it to release memory. This isn't a
guaranteed lower limit, since it relies on guest cooperation, but at
least the guest is aware of what the host is telling it to do.
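
To put that in terms of the original question: SetMaxMemory maps to
<memory> and SetMemory to <currentMemory>. A minimal sketch, assuming a
domain named 'demo' (illustrative values; both API calls take kilobytes,
error handling omitted):

  <domain type='kvm'>
    ...
    <memory>1048576</memory>               <!-- maxmem: 1 GB -->
    <currentMemory>524288</currentMemory>  <!-- balloon target: 512 MB -->
    ...
  </domain>

  #include <libvirt/libvirt.h>

  virConnectPtr conn = virConnectOpen("qemu:///system");
  virDomainPtr dom = virDomainLookupByName(conn, "demo");

  virDomainSetMaxMemory(dom, 1048576);  /* the <memory> value */
  virDomainSetMemory(dom, 524288);      /* the <currentMemory> value,
                                           asks the balloon to shrink */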

Depending on bugs in the guest balloon driver, the 'free' command may or
may not update the 'total memory' figure in the guest.

What Matthias is talking about wrt VMware ESX is how the hypervisor
satisfies the memory allocation for the guest, eg how much real RAM it
guarantees, with the rest of guest RAM susceptible to swapping. This is
more of a tuning parameter, and does not map onto the libvirt
memory/maxmem settings.
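
For reference, the corresponding knobs on the ESX side live in the .vmx
file. A rough sketch (parameter names quoted from memory, so treat them
as illustrative):

  memsize = "2048"
  sched.mem.min = "512"
  sched.mem.max = "1024"

Here memsize is the "physical" size shown to the guest (in MB),
sched.mem.min the reservation of guaranteed real RAM, and sched.mem.max
the upper limit on real RAM backing the guest. Only memsize corresponds
to what libvirt's memory/maxmem describe; the sched.mem.* pair is the
reservation/limit tuning discussed above.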

KVM in combination with cgroups actually has a similar tuning ability,
where we can use cgroups to control how much physical RAM is available
to the guest, with the rest of its allocated RAM being susceptible to
swapping on the host.
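
A rough sketch of doing that by hand, assuming the cgroup memory
controller is compiled in and QEMU's pid is known (mount point and group
name are made up):

  mount -t cgroup -o memory none /cgroups
  mkdir /cgroups/demo
  echo 512M > /cgroups/demo/memory.limit_in_bytes  # cap real RAM at 512 MB
  echo $QEMU_PID > /cgroups/demo/tasks             # move the QEMU process in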

These tuning capabilities would actually work in conjunction with the
ballooning capabilities. Eg, if you allocated maxmem=1G and set
mem=500M, then you would also set the tuning parameter for allowed
physical allocation on the host to 500M. This lets you both tell the
guest to reduce its usage via the balloon driver, and at the same time
enforce it from the host side so you don't get overcommit of physical
RAM.
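
Ie, something along these lines (domain name and cgroup path as made up
above; virsh setmem takes kilobytes):

  virsh setmem demo 512000                         # balloon down to ~500 MB
  echo 500M > /cgroups/demo/memory.limit_in_bytes  # enforce it on the host side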

We don't have anywhere to expose these tuning knobs though...

Daniel
-- 
|: Red Hat, Engineering, London   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



