[vfio-users] Q on vfio-pci driver usage on Host

Ravi Kerur rkerur at gmail.com
Mon Apr 13 17:33:21 UTC 2020


On Mon, Apr 13, 2020 at 8:36 AM Alex Williamson <alex.williamson at redhat.com>
wrote:

> On Sun, 12 Apr 2020 09:10:49 -0700
> Ravi Kerur <rkerur at gmail.com> wrote:
>
> > Hi,
> >
> > I use Intel NICs for PF and VF devices. VFs are assigned to virtual
> > machines and PF is used on the Host. I have intel-iommu=on on GRUB which
> > enables DMAR and IOMMU capabilities (checked via 'dmesg | grep -e IOMMU
> > -e DMAR') and use DPDK for datapath acceleration.
> >
> > Couple of clarifications I need in terms of vfio-pci driver usage
> >
> > (1) with intel-iommu=pt (Passthrough mode), PF device on host can bind to
> > either igb_uio or vfio-pci driver and similarly VF devices assigned to
> > each VM can bind to either igb_uio or vfio-pci driver via Qemu
>
> Note that the actual option is 'intel_iommu=on iommu=pt'.
>

My mistake.

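For anyone following along, here is a sketch of where those options go. The file path and the regeneration command are distro-dependent assumptions; the snippet only prints the line to add:

```shell
# sketch: kernel command line for IOMMU passthrough mode; /etc/default/grub
# and the regeneration commands below are distro-dependent assumptions
CMDLINE="intel_iommu=on iommu=pt"
echo "GRUB_CMDLINE_LINUX_DEFAULT=\"$CMDLINE\""   # add this line to /etc/default/grub
# then regenerate and reboot:
#   sudo update-grub                                (Debian/Ubuntu)
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg     (Fedora/RHEL)
# and verify after reboot with: dmesg | grep -e DMAR -e IOMMU
```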
>
> > (2) with intel-iommu=on (IOMMU enabled), PF device on host must bind to
> > vfio-pci driver and similarly VF devices assigned to each VM must bind to
> > vfio-pci driver. When IOMMU is enabled, only vfio-pci should be used?
>
> When an IOMMU is present, we refer to the address space through which a
> device performs DMA as the I/O Virtual Address space, or IOVA.  When
> the IOMMU is in passthrough mode, we effectively create an identity
> mapping of physical addresses through the IOVA space.  Therefore to
> program a device to perform a DMA to user memory, the user only needs
> to perform a virtual to physical translation on the address and the
> device can DMA directly with that physical address thanks to the
> identity map.  When we're not in passthrough mode, we need to actually
> create a mapping through the IOMMU to allow the device to access that
> physical memory.  VFIO is the only userspace driver interface that I'm
> aware of that provides this latter functionality.  Therefore, yes, if
> you have the IOMMU enabled and not in passthrough mode, your userspace
> driver needs support for programming the IOMMU, which vfio-pci provides.
>
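To connect this to driver binding: below is a dry-run sketch of the sysfs steps that put a device under vfio-pci so a userspace driver can program the IOMMU through the VFIO interface. The PCI address is a placeholder, and the script only prints the commands rather than executing them:

```shell
# sketch: sysfs steps to bind a device to vfio-pci (dry run -- it only
# prints the commands; 0000:03:00.0 is a placeholder PCI address)
DEV=0000:03:00.0
cat <<EOF
modprobe vfio-pci
echo $DEV > /sys/bus/pci/devices/$DEV/driver/unbind
echo vfio-pci > /sys/bus/pci/devices/$DEV/driver_override
echo $DEV > /sys/bus/pci/drivers_probe
EOF
```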
> Also, having both the PF and VFs owned by userspace drivers presents
> some additional risks, for example the PF may have access to the data
> accessed by the VF, or at least be able to deny service to the VF.
> There have been various hacks around this presented by the DPDK
> community, essentially enabling SR-IOV underneath vfio-pci, without the
> driver's knowledge.  These are very much not recommended, IMO.
> However, we have added SR-IOV support to the vfio-pci driver in kernel
> v5.7 and DPDK support is under development, which represents this trust
> and collaboration between PF and VF drivers using a new VF token
> concept.  I'd encourage you to look for this if your configuration does
> require both PF and VF drivers in userspace.  A much more normal
> configuration to this point has been that the PF makes use of a host
> driver (ex. igb, ixgbe, i40e, etc.) while the VF is bound to vfio-pci
> for userspace drivers.  In this configuration the host driver is
> considered trusted and we don't need to invent new mechanisms to
> indicate collaboration between userspace drivers.  Thanks,
>
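For reference once the v5.7 bits land, here is a hedged sketch of the flow described above: enabling VFs on a PF that is bound to vfio-pci, then handing the shared VF token to a PF application. The PCI address, the UUID, and the DPDK option spelling are assumptions on my part, and the snippet prints the commands as a dry run:

```shell
# sketch: vfio-pci SR-IOV (kernel v5.7+) with the VF token; dry run --
# the PCI address, UUID, and DPDK option names below are placeholders
PF=0000:03:00.0
TOKEN=14d63f20-8445-11ea-8900-1f9ce7d5650d   # any UUID shared by PF and VF users
cat <<EOF
modprobe vfio-pci enable_sriov=1
echo 2 > /sys/bus/pci/devices/$PF/sriov_numvfs
dpdk-testpmd -a $PF --vfio-vf-token=$TOKEN -- -i
EOF
```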

Thanks for the information; I now clearly understand what I need to do. Where
can I find information on the vfio-pci SR-IOV support (a writeup or design document)?

Thanks,
Ravi


>
> Alex
>
>