[libvirt] Globally Reserve Resources for Host

Daniel P. Berrange berrange at redhat.com
Wed Nov 28 20:26:53 UTC 2012

On Wed, Nov 28, 2012 at 02:27:53PM -0500, Dusty Mabe wrote:
> On Wed, Nov 14, 2012 at 11:22 AM, Dusty Mabe <dustymabe at gmail.com> wrote:
> > On Thu, Nov 1, 2012 at 11:32 PM, Dusty Mabe <dustymabe at gmail.com> wrote:
> >
> >
> > I think I have a very minimal implementation of what I proposed in my
> > original email ("reserving resources for host"). It is not quite as
> > featureful as what you discussed with danpb
> > (https://www.redhat.com/archives/libvir-list/2011-March/msg01546.html),
> > but it is a small amount of work and should be worth the effort.
> >
> > As of right now this is specifically for qemu - Basically the idea is
> > that right after the cgroup gets created for the qemu driver we will
> > set memory and cpu restrictions for the group like so:
> >
> > from qemu_driver.c:
> > rc = virCgroupForDriver("qemu", &qemu_driver->cgroup, privileged, 1);
> > rc = virCgroupSetMemory(&qemu_driver->cgroup, availableMem);
> > rc = virCgroupSetCpus(&qemu_driver->cgroup, availableCpus);
> >
> > The user will provide values in qemu.conf for "reservedHostMem" and
> > "reservedHostCpus" and then availableMem and availableCpus would be
> > calculated from that. If no values were provided in the conf then
> > simply act as normal and don't enforce any "restrictions".
> >
> > We may also want to expose this "setting" in virsh so that we could
> > change the value once up and running.
> >
> >
> > Does this seem trivial to implement as I suggest? Are there any flaws
> > with this idea?
> Hey Hey,
> Just thought I would ping on this thread to see if anyone had any
> input. I may try to code up a minimal implementation and send a patch
> if anyone thinks that would be useful in evaluating this feature.

Sorry for not replying before. I've been thinking about this today
and am specifically wondering about the possible implications for
a change we need to make to cgroups setup in libvirt.

The core issue is that it has become apparent that nesting cgroups
can cause some very significant performance / scalability problems
for the kernel. They have requested / recommended that libvirt make
its cgroup hierarchy as flat as possible to avoid this problem.

Libvirt currently creates a hierarchy 3 to 4 levels deep below the
cgroup that libvirtd itself is placed in.

  $ROOT/$LIBVIRTD/libvirt/$DRIVERNAME/$VMNAME/{vcpu$VCPUNUM or emulator}

eg with systemd, which places libvirtd in its own service cgroup, you
might get something like

  $ROOT/system/libvirtd.service/libvirt/qemu/$VMNAME/{vcpu$VCPUNUM or emulator}

The second level ('libvirt') is clearly redundant if systemd is already
placing libvirtd in a private cgroup. The third and fourth levels
('$DRIVERNAME' and '$VMNAME') could optionally be combined into
'$DRIVERNAME-$VMNAME'. The last levels must remain unchanged. This would
result in examples like

  $ROOT/$LIBVIRTD/$DRIVERNAME-$VMNAME/{vcpu$VCPUNUM or emulator}

We want this change to apply out of the box.

In fact, my expectation is that we'll not actually hardcode this
layout, but instead introduce some level of configurability, by
having a 'cgroup_layout' config parameter in qemu.conf:

 1. Match current hardcoded layout:

      $ROOT/$LIBVIRTD/libvirt/$DRIVERNAME/$VMNAME

 2. Remove the redundant 'libvirt' level when used with systemd:

      $ROOT/$LIBVIRTD/$DRIVERNAME/$VMNAME

 3. Combine the 3rd/4th levels too:

      $ROOT/$LIBVIRTD/$DRIVERNAME-$VMNAME

 4. Ignore current libvirtd placement completely and create in the root:

      $ROOT/$DRIVERNAME-$VMNAME

 5. Use the VM UUID instead of the VM name:

      $ROOT/$DRIVERNAME-$VMUUID

I'm sure you've noticed that this plan doesn't leave much scope for the
approach you outlined above, since the 'qemu' level will disappear by
default.

On the plus side though, since we will have the flexibility to put
VMs in a cgroup that is unrelated to the libvirtd cgroup, (options
4/5 above), you will be able to isolate VMs from the host that way,
without needing explicit libvirt support.

It is on my plate to get these changes done for Fedora 19 / RHEL-7.

|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|
