[libvirt] Analysis of the effect of adding PCIe root ports

Daniel P. Berrange berrange at redhat.com
Thu Oct 6 16:10:42 UTC 2016


On Thu, Oct 06, 2016 at 11:57:17AM -0400, Laine Stump wrote:
> On 10/06/2016 11:31 AM, Daniel P. Berrange wrote:
> > On Thu, Oct 06, 2016 at 12:58:51PM +0200, Andrea Bolognani wrote:
> > > On Wed, 2016-10-05 at 18:36 +0100, Richard W.M. Jones wrote:
> > > > > > (b) It would be nice to turn the whole thing off for people who don't
> > > > > > care about / need hotplugging.
> > > > > I had contemplated having an "availablePCIeSlots" (or something like
> > > > > that) that was either an attribute of the config, or an option in
> > > > > qemu.conf or libvirtd.conf. If we had such a setting, it could be
> > > > > set to "0".
> > > I remember some pushback when this was proposed. Maybe we
> > > should just give up on the idea of providing spare
> > > hotpluggable PCIe slots by default and ask the user to add
> > > them explicitly after all.
> > > 
> > > > Note that changes to libvirt conf files are not usable by libguestfs.
> > > > The setting would need to go into the XML, and please also make it
> > > > possible to determine if $random version of libvirt supports the
> > > > setting, either by a version check or something in capabilities.
> > > Note that you can avoid using any PCIe root port at all by
> > > assigning PCI addresses manually. It looks like the overhead
> > > for the small (I'm assuming) number of devices a libguestfs
> > > appliance will use is low enough that you will probably not
> > > want to open that can of worms, though.
> > For most apps the performance impact of the PCI enumeration
> > is not a big deal. So having libvirt ensure there's enough
> > available hotpluggable PCIe slots is reasonable, as long as
> > we leave a get-out clause for libguestfs.
> > 
> > This could be as simple as declaring that *if* we see one
> > or more <controller type="pci"> in the input XML, then libvirt
> > will honour those and not try to add new controllers to the
> > guest.
> > 
> > That way, by default libvirt will just "do the right thing"
> > and auto-create a suitable number of controllers needed to
> > boot the guest.
> > 
> > Apps that want strict control though, can specify the
> > <controllers> elements themselves.  Libvirt can still
> > auto-assign device addresses onto these controllers.
> > It simply wouldn't add any further controllers itself
> > at that point. NB I'm talking cold-boot here. So libguestfs
> > would specify <controller> itself to the minimal set it wants
> > to optimize its boot performance.
> 
> That works for the initial definition of the domain, but as soon as you've
> saved it once, there will be controllers explicitly in the config, and since
> we don't have any way of differentiating between auto-added controllers and
> those specifically requested by the user, we have to assume they were
> explicitly added, so such a check is then meaningless because you will
> *always* have PCI controllers.

Ok, so coldplug was probably the wrong word to use. What I actually
meant was "at time of initial define", since that's when libvirt
actually does its controller auto-creation. If you later add more
devices to the guest, whether it is online or offline, libvirt
would still auto-add more controllers if required (and if
possible). I was not expecting libvirt to remember whether it
had auto-added controllers the first time or not.

> Say you create a domain definition with no controllers, you would get enough
> for the devices in the initial config, plus "N" more empty root ports. Let's
> say you then add 4 more devices (either hotplug or coldplug, doesn't
> matter). Those devices are placed on the existing unused pcie-root-ports.
> But now all your ports are full, and since you have PCI controllers in the
> config, libvirt is going to say "Ah, this user knows what they want to do,
> so I'm not going to add any extras! I'm so smart!". This would be especially
> maddening in the case of "coldplug", where libvirt could have easily added a
> new controller to accommodate the new device, but didn't.
> 
> Unless we don't care what happens after the initial definition (and then
> adding of "N" new devices), trying to behave properly purely based on
> whether or not there are any PCI controllers present in the config isn't
> going to work.

I think that's fine.

Let's stop talking about coldplug, since that term is very misleading.

What I mean is that...

1. When initially defining a guest

   If no controllers are present, auto-add controllers implied
   by the machine type, sufficient to deal with all currently
   listed devices, plus "N" extra spare ports.

   Else, simply assign devices to the controllers listed in
   the XML config. If there are no extra spare ports after
   doing this, so be it. It was the application's choice
   to have not listed enough controllers to allow later
   addition of more devices.
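To make the second branch concrete, here is a sketch of what an
application like libguestfs might pass in. The guest name, disk path,
and device set are made up for illustration; the controller models
(pcie-root, pcie-root-port) are the standard libvirt ones for the
q35 machine type:

```xml
<!-- Hypothetical minimal q35 guest: the application lists its
     controllers explicitly, so under the proposed rule libvirt
     would not auto-add any spare root ports. -->
<domain type='kvm'>
  <name>appliance</name>
  <os>
    <type arch='x86_64' machine='q35'>hvm</type>
  </os>
  <devices>
    <!-- pcie-root is the bus built into the q35 machine type -->
    <controller type='pci' index='0' model='pcie-root'/>
    <!-- exactly one root port: just enough for the one disk below,
         leaving no spare hotpluggable slots, by choice -->
    <controller type='pci' index='1' model='pcie-root-port'/>
    <disk type='file' device='disk'>
      <source file='/var/tmp/appliance.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```

Libvirt would still auto-assign the disk's PCI address onto the one
listed root port; it just would not grow the controller set.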


2. When adding further devices (whether to an offline or online
   guest)

   If there are not enough slots left, add further controllers
   to host the devices. If there are no slots left that would
   allow adding further controllers, that must be due to the
   application's decision at the time of defining the original
   XML.
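Conversely, in the default case (no <controller> elements in the
input XML, so libvirt created "N" spare root ports at define time),
a later addition just lands on a free port. A sketch of the device
XML one might feed to "virsh attach-device" (file name and disk
details invented for illustration):

```xml
<!-- Hypothetical device XML for hotplug, e.g.
     "virsh attach-device guest disk.xml --live".
     No <address> element is given, so libvirt assigns the device
     onto a spare pcie-root-port, if one is still available. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/extra.qcow2'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```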



Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|
