[libvirt] [PATCH 0/8] logically memory hotplug via guest agent

Peter Krempa pkrempa at redhat.com
Tue Jun 9 11:22:49 UTC 2015


On Tue, Jun 09, 2015 at 11:05:16 +0100, Daniel Berrange wrote:
> On Tue, Jun 09, 2015 at 05:33:24PM +0800, Zhang Bo wrote:
> > Logical memory hotplug via the guest agent, by enabling/disabling memory blocks.
> > The corresponding qga commands are: 'guest-get-memory-blocks',
> > 'guest-set-memory-blocks' and 'guest-get-memory-block-info'.
> > 
> > detailed flow:
> >     1 get the memory block list; each member has 'phys-index', 'online' and 'can-offline' parameters
> >     2 get the memory block size, normally 128 MiB or 256 MiB for most OSes
> >     3 convert the target memory size to a number of memory blocks, and check whether there
> >       are enough memory blocks to be set online/offline
> >     4 update the memory block list info, and have the guest agent set the memory blocks online/offline
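
(In qga terms, steps 1-4 above map onto the agent passthrough roughly as
follows - a minimal sketch using libvirt-qemu's virDomainQemuAgentCommand()
with the JSON parsing omitted; query_memory_blocks() is an illustrative
name, not part of the proposed API:)

    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>
    #include <libvirt/libvirt-qemu.h>

    /* Steps 1-2 of the flow quoted above, via the agent passthrough. */
    static int query_memory_blocks(virDomainPtr dom)
    {
        char *reply;

        /* step 2: block size in bytes, e.g. 134217728 for 128 MiB */
        if (!(reply = virDomainQemuAgentCommand(dom,
                  "{\"execute\": \"guest-get-memory-block-info\"}",
                  VIR_DOMAIN_QEMU_AGENT_COMMAND_DEFAULT, 0)))
            return -1;
        printf("block info: %s\n", reply);
        free(reply);

        /* step 1: list of {phys-index, online, can-offline} triples */
        if (!(reply = virDomainQemuAgentCommand(dom,
                  "{\"execute\": \"guest-get-memory-blocks\"}",
                  VIR_DOMAIN_QEMU_AGENT_COMMAND_DEFAULT, 0)))
            return -1;
        printf("blocks: %s\n", reply);
        free(reply);

        /* steps 3-4 would then compute how many blocks to flip and issue
         * guest-set-memory-blocks with updated 'online' flags */
        return 0;
    }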
> > 
> > 
> > Note that because we hotplug memory logically by onlining/offlining MEMORY BLOCKS,
> > and each memory block is much bigger than a KiB, the achieved size can deviate
> > from the requested one by up to block_size. block_size may be 128 MiB, 256 MiB,
> > etc.; it differs between OSes.
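
(Concretely: with a 256 MiB block size, a request for 2400 MiB can only be
satisfied as 9 blocks = 2304 MiB or 10 blocks = 2560 MiB, so the achieved
size may miss the requested one by just under one block.)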
> 
> So there are a lot of questions about this feature that are unclear to me...
> 
> This appears to operate entirely via guest agent commands. How
> does this then correspond to increased/decreased allocation in the
> host-side QEMU ? What are the upper/lower bounds on adding/removing
> blocks, e.g. what prevents a malicious guest from asking for more
> memory to be added to itself than we wish to allow ? How is this
> better / worse than adjusting memory via the balloon driver ? How
> does this relate to the

There are two possibilities where this could be advantageous:

1) This could be better than ballooning (if it actually returned the
memory to the host, which it doesn't), since you would probably be able
to disable memory regions on specific NUMA nodes, which is not possible
with the current balloon driver (memory is taken at random).

2) The guest OS sometimes needs to online the memory region after ACPI
memory hotplug. The GA would be able to online such memory. For this
we wouldn't need a separate API though, since it could be rolled into
the existing hotplug API via a flag.
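
On Linux the agent-side onlining is essentially a sysfs write. Roughly
what happens per block - a sketch only, with set_block_state() being an
illustrative name rather than actual agent code:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Drive one memory block through the standard Linux memory-hotplug
     * sysfs interface: /sys/devices/system/memory/memoryN/state.
     * Writing "online"/"offline" there is what guest-set-memory-blocks
     * effectively boils down to; 'phys-index' corresponds to N. */
    static int set_block_state(unsigned long long phys_index, int online)
    {
        char path[128];
        const char *state = online ? "online" : "offline";
        int fd, ret = 0;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/memory/memory%llu/state", phys_index);

        if ((fd = open(path, O_WRONLY)) < 0)
            return -1;
        if (write(fd, state, strlen(state)) < 0)
            ret = -1;
        close(fd);
        return ret;
    }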

> recently added DIMM hot add/remove feature on the host side, if at all ?
> Are the changes made synchronously or asynchronously - i.e. does the
> API block while the guest OS releases the memory from the blocks that
> are released, or is it totally in the background like the balloon driver ?
> 
> From a design POV, we're reusing the existing virDomainSetMemory API but
> adding a restriction that it has to be in multiples of the block size,
> which the mgmt app has no way of knowing upfront. It feels like this is
> information we need to be able to expose to the app in some manner.
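
(Right - as proposed, the app would have to do the rounding itself, along
these lines; a sketch, with set_memory_rounded() being a hypothetical
helper and block_kib assumed to come from guest-get-memory-block-info:)

    #include <libvirt/libvirt.h>

    /* virDomainSetMemory() takes KiB; under the proposed semantics the
     * value must be a multiple of the guest's memory block size, which
     * the app currently can't discover through the official API. */
    static int set_memory_rounded(virDomainPtr dom,
                                  unsigned long target_kib,
                                  unsigned long block_kib)
    {
        target_kib -= target_kib % block_kib;  /* round down to a block */
        return virDomainSetMemory(dom, target_kib);
    }

E.g. set_memory_rounded(dom, 2400 * 1024, 128 * 1024) would end up at
2304 MiB, not the requested 2400 MiB.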

Since this feature would not actually release any host resources, in
contrast with agent-based vCPU unplug, I don't think it's worth exposing
the memory region manipulation APIs via libvirt.

The only sane use I can think of is enabling the memory regions
after hotplug.

Peter