[libvirt] [PATCH 0/8] logically memory hotplug via guest agent

Daniel P. Berrange berrange at redhat.com
Tue Jun 9 12:05:35 UTC 2015


On Tue, Jun 09, 2015 at 02:03:13PM +0200, Peter Krempa wrote:
> On Tue, Jun 09, 2015 at 12:46:27 +0100, Daniel Berrange wrote:
> > On Tue, Jun 09, 2015 at 01:22:49PM +0200, Peter Krempa wrote:
> > > On Tue, Jun 09, 2015 at 11:05:16 +0100, Daniel Berrange wrote:
> > > > On Tue, Jun 09, 2015 at 05:33:24PM +0800, Zhang Bo wrote:
> > > > > Logical memory hotplug via the guest agent, by enabling/disabling memory blocks.
> > > > > The corresponding qga commands are: 'guest-get-memory-blocks',
> > > > > 'guest-set-memory-blocks' and 'guest-get-memory-block-info'.
> > > > > 
> > > > > detailed flow:
> > > > >     1 get the memory block list; each member has 'phys-index', 'online' and 'can-offline' fields
> > > > >     2 get the memory block size, normally 128MB or 256MB for most OSes
> > > > >     3 convert the target memory size to a number of memory blocks, and check whether there
> > > > >       are enough memory blocks to be set online/offline
> > > > >     4 update the memory block list info, and have the guest agent set the memory blocks
> > > > >       online/offline (a sketch of these steps follows below)
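> > > > > 
> > > > > To make the flow concrete, here is a minimal sketch of the four steps
> > > > > driven from the host via libvirt-python's libvirt_qemu module (the
> > > > > domain name 'demo' and the 2GiB target are made up; the qga command
> > > > > names and fields are those of the guest agent schema):
> > > > > 
> > > > >     import json
> > > > >     import libvirt
> > > > >     import libvirt_qemu
> > > > > 
> > > > >     def agent(dom, cmd, args=None):
> > > > >         # wrap a qga command in the QMP-style JSON envelope
> > > > >         req = {'execute': cmd}
> > > > >         if args is not None:
> > > > >             req['arguments'] = args
> > > > >         ret = libvirt_qemu.qemuAgentCommand(dom, json.dumps(req), 5, 0)
> > > > >         return json.loads(ret)['return']
> > > > > 
> > > > >     conn = libvirt.open('qemu:///system')
> > > > >     dom = conn.lookupByName('demo')
> > > > > 
> > > > >     # step 1: list of {'phys-index', 'online', 'can-offline'} dicts
> > > > >     blocks = agent(dom, 'guest-get-memory-blocks')
> > > > > 
> > > > >     # step 2: block size in bytes (128MiB/256MiB on most OSes)
> > > > >     size = agent(dom, 'guest-get-memory-block-info')['size']
> > > > > 
> > > > >     # step 3: blocks needed for a 2GiB target, minus what is
> > > > >     # already online, gives the number of blocks to flip
> > > > >     need = (2 * 1024 ** 3) // size
> > > > >     online = sum(1 for b in blocks if b['online'])
> > > > >     todo = [b for b in blocks if not b['online']][:max(need - online, 0)]
> > > > > 
> > > > >     # step 4: flip the chosen blocks and push the list back
> > > > >     for b in todo:
> > > > >         b['online'] = True
> > > > >     agent(dom, 'guest-set-memory-blocks', {'mem-blks': todo})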
> > > > > 
> > > > > 
> > > > > Note that because we hotplug memory logically by onlining/offlining MEMORY BLOCKS,
> > > > > and each memory block is much bigger than a KiB, the achieved size may deviate
> > > > > from the requested size by anything in the range (0, block_size). block_size may
> > > > > be 128MB, 256MB etc.; it differs between OSes.
> > > > 
> > > > So there are a lot of questions about this feature that are unclear to me...
> > > > 
> > > > This appears to operate entirely via guest agent commands. How does
> > > > this then correspond to an increased/decreased allocation in the
> > > > host-side QEMU? What are the upper/lower bounds on adding/removing
> > > > blocks, e.g. what prevents a malicious guest from asking for more
> > > > memory to be added to itself than we wish to allow? How is this
> > > > better/worse than adjusting memory via the balloon driver? How does this relate to the
> > > 
> > > There are two possibilities where this could be advantageous:
> > > 
> > > 1) This could be better than ballooning (if it actually returned the
> > > memory to the host, which it doesn't), since you would probably be able
> > > to offline memory regions in specific NUMA nodes, which is not possible
> > > with the current balloon driver (memory is taken randomly) - see the
> > > sysfs sketch after point 2.
> > > 
> > > 2) The guest OS sometimes needs to online a memory region after ACPI
> > > memory hotplug. The GA would be able to online such memory. For this
> > > option we don't need to go through a different API though, since it
> > > can be combined with the existing hotplug API using a flag.
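> > > 
> > > To illustrate what 1) would enable inside the guest, this is roughly
> > > the Linux sysfs interface the GA drives when offlining the blocks of
> > > a particular NUMA node (a sketch; the node number is illustrative,
> > > and the offline write can still fail with EBUSY on busy blocks):
> > > 
> > >     import glob
> > >     import os
> > > 
> > >     # memory blocks of NUMA node 1 appear as symlinks under the node dir
> > >     for path in glob.glob('/sys/devices/system/node/node1/memory*'):
> > >         block = os.path.basename(path)        # e.g. 'memory42'
> > >         base = os.path.join('/sys/devices/system/memory', block)
> > >         with open(os.path.join(base, 'removable')) as f:
> > >             if f.read().strip() != '1':
> > >                 continue                      # kernel says it can't go offline
> > >         with open(os.path.join(base, 'state'), 'w') as f:
> > >             f.write('offline')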
> > 
> > So, are you saying that we should not be adding this to the
> > virDomainSetMemory API as done in this series, and we should
> > instead be able to request automatic enabling/disabling of the
> > regions when we do the original DIMM hotplug?
> 
> Well, that's the only place where using the memory region GA APIs would
> make sense for libvirt.
> 
> Whether we should do it is not that clear. Windows does online the
> regions automatically, and I was told that some Linux distros do it via
> udev rules (an example rule follows below).
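> 
> For reference, a typical rule of that sort looks something like the
> following (illustrative - distros differ in the exact match keys):
> 
>     # online newly added memory blocks as soon as they appear
>     SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"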

What do we do in the case of hotunplug currently? Are we expecting the
guest admin to have manually offlined the regions before doing the
hotunplug on the host?

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
