[libvirt] [Qemu-devel] [PATCH v7 0/4] Add Mediated device support

Tian, Kevin kevin.tian at intel.com
Wed Sep 7 06:48:25 UTC 2016


> From: Alex Williamson [mailto:alex.williamson at redhat.com]
> Sent: Wednesday, September 07, 2016 1:41 AM
> 
> On Sat, 3 Sep 2016 22:04:56 +0530
> Kirti Wankhede <kwankhede at nvidia.com> wrote:
> 
> > On 9/3/2016 3:18 AM, Paolo Bonzini wrote:
> > >
> > >
> > > On 02/09/2016 20:33, Kirti Wankhede wrote:
> > >> <Alex> We could even do:
> > >>>>
> > >>>> echo $UUID1:$GROUPA > create
> > >>>>
> > >>>> where $GROUPA is the group ID of a previously created mdev device into
> > >>>> which $UUID1 is to be created and added to the same group.
> > >> </Alex>
> > >
> > > From the point of view of libvirt, I think I prefer Alex's idea.
> > > <group> could be an additional element in the nodedev-create XML:
> > >
> > >     <device>
> > >       <name>my-vgpu</name>
> > >       <parent>pci_0000_86_00_0</parent>
> > >       <capability type='mdev'>
> > >         <type id='11'/>
> > >         <uuid>0695d332-7831-493f-9e71-1c85c8911a08</uuid>
> > >         <group>group1</group>
> > >       </capability>
> > >     </device>
> > >
> > > (should group also be a UUID?)
> > >
> >
> > No, this should be a unique number in the system, similar to an iommu group number.
> 
> Sorry, just trying to catch up on this thread after a long weekend.
> 
> We're talking about iommu groups here; we're not creating any sort of
> parallel grouping specific to mdev devices.  This is why my example
> created a device and then required the user to go find the group number
> given to that device in order to create another device within the same
> group.  iommu group numbering is not within the user's control and is
> not a uuid.  libvirt can refer to the group as anything it wants in the
> xml, but the group number is allocated by the host, is not under user
> control, and is not persistent.  libvirt would just be giving it a name
> to know which devices are part of the same group.  Perhaps the runtime
> xml would fill in the group number once created.
> 
> There were also a lot of unanswered questions in my proposal; it's not
> clear that there's a standard algorithm for when mdev devices need to
> be grouped together.  Should we even allow groups to span multiple host
> devices?  Should they be allowed to span devices from different
> vendors?

I think we should limit the scope of an iommu group for mdev here: it
should only contain mdevs belonging to the same parent device. Groups
that span multiple host devices (regardless of whether they come from
different vendors) are determined by the physical isolation granularity.
Better not to mix the two levels together. I'm not sure whether NVIDIA
has a requirement to start all vGPUs together even when they come from
different parent devices. Hope not...
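
For illustration, Alex's flow might look roughly like this from userspace
(a rough sketch only -- the exact sysfs location of the 'create' attribute
and of the mdev device node are assumptions here, since the layout is still
being settled in this series; the iommu_group symlink follows the usual
kernel convention):

    # create the first mdev on the parent device (path illustrative)
    echo $UUID0 > /sys/bus/pci/devices/0000:86:00.0/.../create

    # ask the host which iommu group that device was placed in
    GROUPA=$(basename $(readlink /sys/bus/mdev/devices/$UUID0/iommu_group))

    # create $UUID1 in the same group, per the proposed syntax
    echo $UUID1:$GROUPA > /sys/bus/pci/devices/0000:86:00.0/.../create

libvirt would only need to remember $GROUPA for the lifetime of the devices;
the number itself is not persistent across reboots, as Alex notes.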

> 
> If we imagine a scenario of a group composed of a mix of Intel and
> NVIDIA vGPUs, what happens when an Intel device is opened first?  The
> NVIDIA driver wouldn't know about this, but it would know when the
> first NVIDIA device is opened and be able to establish p2p for the
> NVIDIA devices at that point.  Can we do what we need with that model?
> What if libvirt is asked to hot-add an NVIDIA vGPU?  It would need to
> do a create on the NVIDIA parent device with the existing group id, at
> which point the NVIDIA vendor driver could fail the device create if
> the p2p setup has already been done.  The Intel vendor driver might
> allow it.  Similar to open, the last close of the mdev device for a
> given vendor (which might not be the last close of mdev devices within
> the group) would need to trigger the offline process for that vendor.

I assume an iommu group represents the minimal isolation granularity.
At a higher level we have the VFIO container, which can deliver both
Intel vGPUs and NVIDIA vGPUs to the same VM. Each Intel vGPU has its
own iommu group, while NVIDIA vGPUs of the same parent device may
share one group.
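
Concretely (and again only as a sketch -- the mdev sysfs device path is an
assumption, and $INTEL_UUID*/$NVIDIA_UUID* are just placeholders), that
topology could be checked on the host by looking at the standard
iommu_group symlinks:

    # each Intel vGPU should point at its own group number
    readlink /sys/bus/mdev/devices/$INTEL_UUID1/iommu_group
    readlink /sys/bus/mdev/devices/$INTEL_UUID2/iommu_group

    # NVIDIA vGPUs created on the same parent would all point at one group
    readlink /sys/bus/mdev/devices/$NVIDIA_UUID1/iommu_group
    readlink /sys/bus/mdev/devices/$NVIDIA_UUID2/iommu_group

QEMU then opens /dev/vfio/$GROUP for each of those groups and attaches them
all to the single VFIO container backing the VM, so mixing vendors at the
container level doesn't require mixing them within one group.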

Thanks
Kevin