[vfio-users] about vfio interrupt performance

Alex Williamson alex.williamson at redhat.com
Mon Jun 17 17:53:44 UTC 2019


On Mon, 17 Jun 2019 16:00:42 +0800
James <smilingjames at gmail.com> wrote:

> Hi Experts:
> 
> Sorry to disturb you.
> 
> 
> 
> I couldn't find any solid data about vfio interrupt performance in the
> community, so I'm mailing you directly.
> 
> 
> 
> We have a PCIe device running on an x86 platform, with no VM in our
> environment.  I plan to replace the kernel-side device driver with the
> vfio framework and reimplement it in user space after enabling
> vfio/vfio_pci/vfio_iommu_type1 in the kernel.  The original intention is
> to remove the dependency on the kernel, so that the application which
> accesses our PCIe device becomes a pure userspace application and can
> run on other Linux distributions (no custom kernel driver needed).

Wouldn't getting your driver upstream also solve some of these issues?

> Our PCIe device has the following characteristics:
> 
> 1. It generates a great number of interrupts while working.
> 
> 2. It also has strict requirements on interrupt processing speed.

There will be more interrupt latency for a vfio userspace driver, the
interrupt is received on the host and signaled to the user via an
eventfd.  Hardware accelerators like APICv and Posted Interrupts are
not available outside of a VM context.  Whether the overhead is
acceptable is something you'll need to determine.  It may be beneficial
to switch to polling mode at high interrupt rates, as network devices
tend to do.  DPDK is a userspace driver framework that makes use of vfio
for device access, but typically uses polling rather than
interrupt-driven data transfer, AIUI.
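
For concreteness, the interrupt path for a vfio-pci userspace driver
looks roughly like the sketch below: the driver registers an eventfd
with the VFIO_DEVICE_SET_IRQS ioctl and then read()s or poll()s that
fd.  This is only a minimal sketch; it assumes device_fd is an already
opened VFIO device file descriptor and the device uses MSI, the helper
name is made up, and error handling is omitted.

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

static int setup_msi_eventfd(int device_fd)
{
	int efd = eventfd(0, EFD_CLOEXEC);

	/* vfio_irq_set ends in a flexible array member, so allocate room
	 * for one 32-bit eventfd descriptor after the header. */
	char buf[sizeof(struct vfio_irq_set) + sizeof(int32_t)];
	struct vfio_irq_set *irq_set = (struct vfio_irq_set *)buf;

	irq_set->argsz = sizeof(buf);
	irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
			 VFIO_IRQ_SET_ACTION_TRIGGER;
	irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;  /* or the MSI-X index */
	irq_set->start = 0;
	irq_set->count = 1;
	memcpy(irq_set->data, &efd, sizeof(int32_t));

	ioctl(device_fd, VFIO_DEVICE_SET_IRQS, irq_set);

	/* Reading 8 bytes from efd yields the number of interrupts since
	 * the last read, which is also a convenient signal for deciding
	 * when to fall back to a polling loop under heavy load. */
	return efd;
}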
 
> 3. It will need to access almost all of the BAR space after mapping.

This is not an issue.
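
Mapping BAR space is a VFIO_DEVICE_GET_REGION_INFO ioctl followed by an
mmap() of the device fd at the reported offset, roughly as in the sketch
below (again assuming an open device_fd; the helper name is made up and
error handling is trimmed).

#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

static void *map_bar0(int device_fd, size_t *size_out)
{
	struct vfio_region_info reg = { .argsz = sizeof(reg) };

	reg.index = VFIO_PCI_BAR0_REGION_INDEX;
	ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &reg);

	/* Regions without the MMAP flag (e.g. I/O port BARs) must be
	 * accessed with pread()/pwrite() at reg.offset instead. */
	if (!(reg.flags & VFIO_REGION_INFO_FLAG_MMAP))
		return NULL;

	*size_out = reg.size;
	return mmap(NULL, reg.size, PROT_READ | PROT_WRITE,
		    MAP_SHARED, device_fd, reg.offset);
}

Each BAR has its own region index (VFIO_PCI_BAR0_REGION_INDEX through
VFIO_PCI_BAR5_REGION_INDEX), so mapping almost all of the BAR space is
just a loop over those indices.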

> I'd like to check with you: compared with the previous kernel-side
> device driver, is there a large drop in interrupt processing speed when
> a huge number of interrupts arrive in a short time?
> 
> What is your opinion on this attempt?  Is it worthwhile to move the
> driver to userspace in this kind of situation (no VM, huge interrupt
> volume, etc.)?

The description implies you're trying to avoid open sourcing your
device driver by moving it to a userspace driver.  While I'd rather run
an untrusted driver in vfio as a userspace driver, this potentially
makes it inaccessible to users whose hardware, or the lack of isolation
provided by their platform, prevents them from making use of your device.

> BTW, I found that people in the community hit some intermittent issues
> when using vfio, such as:
> 
> 1. Some devices' extended configuration space has problems when
> accessed, seemingly at random.
> 
> 2. When devices in the same IOMMU group are accessed at the same time,
> issues are triggered at random.
> 
> Are these issues related to IOMMU hardware limitations, and is there a
> way to work around them for now?

The questions aren't worded clearly enough for me to understand the
issues you're trying to note here.  Some portions of config space are
emulated or virtualized by the vfio kernel driver, some by QEMU.  Since
you won't be using QEMU, you don't have the latter.  The QEMU machine
type and VM PCI topology also determine the availability of extended
config space; these are VM-specific issues.  The IOMMU grouping is
definitely an issue.  IOMMU groups cannot be shared, so usage of the
device might be restricted to physical configurations where IOMMU
isolation is provided (the group-open sketch below shows where that
viability check happens).  The ACS override patch that some people here
use is not and will not be upstreamed, so it should not be considered a
requirement for the availability of your device.  Thanks,

Alex
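
For reference, the group viability check lives in the standard open
sequence for a vfio userspace driver, sketched below.  The group number
and PCI address are placeholders and error handling is omitted;
VFIO_GROUP_GET_STATUS only reports the group as viable when every device
in the group is bound to vfio-pci (or unbound), which is the practical
meaning of "groups cannot be shared".

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

int main(void)
{
	int container = open("/dev/vfio/vfio", O_RDWR);
	int group = open("/dev/vfio/26", O_RDWR);  /* placeholder group */
	struct vfio_group_status status = { .argsz = sizeof(status) };

	ioctl(group, VFIO_GROUP_GET_STATUS, &status);
	if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
		/* Another device in the group is still bound to a host
		 * driver, so vfio refuses to hand out the device. */
		fprintf(stderr, "IOMMU group is not viable\n");
		return 1;
	}

	ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
	ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

	/* Placeholder PCI address; this returns the device fd used for
	 * region info, BAR mapping and interrupt setup above. */
	int device_fd = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:03:00.0");

	return device_fd < 0;
}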



