[vfio-users] about vfio interrupt performance

James smilingjames at gmail.com
Tue Jun 18 11:43:39 UTC 2019


Hi Alex:

Many thanks for your detailed feedback and great help!

1, Yes, getting our driver upstream would also solve this problem :)

2, Got it. Persistently polling one of the device's status registers mapped
via vfio will be a better solution than the eventfd if the interrupt rate is
high.
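
For reference, here is a minimal sketch of the polling path I have in mind.
It assumes BAR0 supports mmap; STATUS_REG_OFFSET and STATUS_WORK_READY are
hypothetical placeholders for our device's real register layout, and
device_fd is the vfio device fd from the usual container/group setup
(omitted here):

/* Minimal sketch: busy-poll a status register in BAR0 through vfio-pci
 * instead of sleeping on the interrupt eventfd.  STATUS_REG_OFFSET and
 * STATUS_WORK_READY are hypothetical placeholders for the real device. */
#include <linux/vfio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <stdint.h>
#include <stdio.h>

#define STATUS_REG_OFFSET  0x10        /* hypothetical device register */
#define STATUS_WORK_READY  (1u << 0)   /* hypothetical "data ready" bit */

static int poll_bar0_status(int device_fd)
{
    struct vfio_region_info reg = {
        .argsz = sizeof(reg),
        .index = VFIO_PCI_BAR0_REGION_INDEX,
    };

    if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &reg) < 0) {
        perror("VFIO_DEVICE_GET_REGION_INFO");
        return -1;
    }

    /* BAR0 must advertise VFIO_REGION_INFO_FLAG_MMAP for this path. */
    volatile uint32_t *bar0 = mmap(NULL, reg.size, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, device_fd, reg.offset);
    if (bar0 == MAP_FAILED) {
        perror("mmap BAR0");
        return -1;
    }

    /* Busy-poll the status register; a real driver would bound this loop
     * and fall back to the eventfd once the device goes idle. */
    while (!(bar0[STATUS_REG_OFFSET / 4] & STATUS_WORK_READY))
        ;

    /* ... service the device here ... */
    return 0;
}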

3, "1, Some device’s extend configuration space will have problem when
accessing by random."
It means I remeber some guy reported that when they try to access extend
configuration space via vfio framwork, sometimes they'll get access error,
not all device have this problem(it only happen to extend configuration
space), it happen rarely.
I forget the link of this issue, not sure if you have some comments to this
kind of issue, so sorry to mislead you and waster your time..
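
For clarity, this is roughly how we read extended configuration space through
the vfio-pci config region (a minimal sketch; the device_fd setup is omitted,
and it only probes the first extended capability header at offset 0x100):

/* Minimal sketch: probe the first extended capability header (config
 * offset 0x100) through the vfio-pci config region.  If the platform or
 * device does not expose extended config space, the read may come back
 * short or as all-ones. */
#include <linux/vfio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>

static int read_ext_cap_header(int device_fd)
{
    struct vfio_region_info cfg = {
        .argsz = sizeof(cfg),
        .index = VFIO_PCI_CONFIG_REGION_INDEX,
    };
    uint32_t header = 0;

    if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &cfg) < 0) {
        perror("VFIO_DEVICE_GET_REGION_INFO (config)");
        return -1;
    }

    /* Extended config space starts at offset 0x100 within the region. */
    if (pread(device_fd, &header, sizeof(header), cfg.offset + 0x100) !=
        sizeof(header)) {
        perror("pread extended config space");
        return -1;
    }

    printf("first extended capability header: 0x%08x\n", (unsigned)header);
    return 0;
}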

4, And on "2, When trying to access the spaces of devices in the same iommu
group at the same time, it will randomly trigger issues":
You mean that if we cannot separate the devices into different iommu groups,
we had better not access two devices in the same group at the same time.
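
To make sure I understand the constraint, this is the kind of check I plan to
do before opening the group (a minimal sketch; the "0000:03:00.0" address is
only an example, and every device listed shares the group with ours):

/* Minimal sketch: list every device that shares the IOMMU group of a
 * given PCI device.  All of them have to be released from their host
 * drivers and owned through the same vfio group fd; they cannot be
 * split between users. */
#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void list_iommu_group(const char *bdf)   /* e.g. "0000:03:00.0" */
{
    char path[PATH_MAX], group[PATH_MAX];
    ssize_t len;

    snprintf(path, sizeof(path),
             "/sys/bus/pci/devices/%s/iommu_group", bdf);
    len = readlink(path, group, sizeof(group) - 1);
    if (len < 0) {
        perror("readlink iommu_group");
        return;
    }
    group[len] = '\0';
    printf("%s is in IOMMU group %s\n", bdf, strrchr(group, '/') + 1);

    /* Every entry under .../iommu_group/devices shares isolation with our
     * device and therefore belongs to the same vfio group. */
    snprintf(path, sizeof(path),
             "/sys/bus/pci/devices/%s/iommu_group/devices", bdf);
    DIR *dir = opendir(path);
    if (!dir)
        return;
    for (struct dirent *d = readdir(dir); d; d = readdir(dir))
        if (d->d_name[0] != '.')
            printf("  group member: %s\n", d->d_name);
    closedir(dir);
}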




Alex Williamson <alex.williamson at redhat.com> wrote on Tue, Jun 18, 2019, at 1:53 AM:

> On Mon, 17 Jun 2019 16:00:42 +0800
> James <smilingjames at gmail.com> wrote:
>
> > Hi Experts:
> >
> > Sorry to disturb you.
> >
> >
> >
> > I failed to find any solid data about vfio interrupt performance in the
> > community, so I am boldly sending this mail to you.
> >
> >
> >
> > We have a PCIe device working on an x86 platform, and no VM in our
> > environment. I plan to replace the kernel-side device driver with the vfio
> > framework and reimplement it in user space after enabling
> > vfio/vfio_pci/vfio_iommu_type1 in the kernel. The original intention is
> > just to get rid of the dependency on the kernel, so that our application
> > which needs to access our PCIe device becomes a pure application that can
> > run on other Linux distributions (no custom kernel driver needed).
>
> Wouldn't getting your driver upstream also solve some of these issues?
>
> > Our PCIe device has the following characteristics:
> >
> > 1, it generates a great number of interrupts when working
> >
> > 2, and it also has a high demand on interrupt processing speed.
>
> There will be more interrupt latency for a vfio userspace driver, the
> interrupt is received on the host and signaled to the user via an
> eventfd.  Hardware accelerators like APICv and Posted Interrupts are
> not available outside of a VM context.  Whether the overhead is
> acceptable is something you'll need to determine.  It may be beneficial
> to switch to polling mode at high interrupt rate as network devices
> tend to do.  DPDK is a userspace driver that makes use of vfio for
> device access, but typically uses polling rather than interrupt-driven
> data transfer AIUI.
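
[Adding a note here for my own reference: a minimal sketch of the eventfd
path you describe, assuming a single MSI vector at index 0.
VFIO_DEVICE_SET_IRQS registers the eventfd and read() then blocks until the
host forwards an interrupt.]

/* Minimal sketch: register an eventfd for MSI vector 0 and wait on it.
 * device_fd is the vfio device fd from the usual container/group setup. */
#include <linux/vfio.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static int wait_for_msi(int device_fd)
{
    int efd = eventfd(0, 0);
    if (efd < 0) {
        perror("eventfd");
        return -1;
    }

    /* vfio_irq_set carries a variable-length data[] holding the eventfd. */
    size_t argsz = sizeof(struct vfio_irq_set) + sizeof(int32_t);
    struct vfio_irq_set *irqs = calloc(1, argsz);
    int32_t efd_val = efd;

    if (!irqs) {
        close(efd);
        return -1;
    }
    irqs->argsz = argsz;
    irqs->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
    irqs->index = VFIO_PCI_MSI_IRQ_INDEX;
    irqs->start = 0;
    irqs->count = 1;
    memcpy(irqs->data, &efd_val, sizeof(efd_val));

    if (ioctl(device_fd, VFIO_DEVICE_SET_IRQS, irqs) < 0) {
        perror("VFIO_DEVICE_SET_IRQS");
        free(irqs);
        close(efd);
        return -1;
    }
    free(irqs);

    for (;;) {
        uint64_t count;
        /* Blocks until the host signals the eventfd; 'count' coalesces any
         * interrupts that arrived while userspace was busy. */
        if (read(efd, &count, sizeof(count)) != sizeof(count))
            break;
        /* ... service the device, or switch to register polling while the
         * interrupt rate stays high ... */
    }
    close(efd);
    return 0;
}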
>
> > 3, it will need to access almost all of the BAR space after mapping.
>
> This is not an issue.
>
> > Here I want to check with you: compared with the previous kernel-side
> > device driver, will there be a huge decrease in interrupt processing speed
> > when the number of interrupts is very large in a short time?
> >
> > What are your comments on this attempt? Is it worthwhile to move the
> > driver to userspace in this kind of situation (no VM, huge interrupt
> > counts, etc.)?
>
> The description implies you're trying to avoid open sourcing your
> device driver by moving it to a userspace driver.  While I'd rather run
> an untrusted driver in vfio as a userspace driver, this potentially
> makes it inaccessible to users where the hardware or lack of isolation
> provided by the platform prevent them from making use of your device.
>
> > BTW, I found there are some random issues reported when using vfio in
> > the community, such as:
> >
> > 1, Some devices' extended configuration space will have problems when
> > accessed, at random.
> >
> > 2, When trying to access the spaces of devices in the same iommu group
> > at the same time, it will randomly trigger issues.
> >
> >
> >
> > Is this kind of issue related to an IOMMU hardware limitation, or can we
> > bypass it via some method for now?
>
> The questions are not well worded to understand the issues you're
> trying to note here.  Some portions of config space are emulated or
> virtualized by the vfio kernel driver, some by QEMU.  Since you won't
> be using QEMU, you don't have the latter.  The QEMU machine type and
> VM PCI topology also determines the availability of extended config
> space, these are VM specific issues.  The IOMMU grouping is definitely
> an issue.  IOMMU groups cannot be shared therefore usage of the device
> might be restricted to physical configurations where IOMMU isolation is
> provided.  The ACS override patch that some people here use is not and
> will not be upstreamed, so it should not be considered as a requirement
> for the availability of your device.  Thanks,
>
> Alex
>