[vfio-users] Use MSI or not?

Colin Godsey crgodsey at gmail.com
Sun Apr 24 14:15:51 UTC 2016


I have 2 Windows instances running with GPU (NVIDIA GTX) passthrough via
vfio. When I first set things up, I had to enable MSI or guest performance
was absolutely horrible.
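
(For anyone hitting the same wall: the way I enabled MSI was the usual
Windows registry tweak. The device instance path below is a placeholder,
yours will differ, and you can confirm from the host with lspci that the
guest driver actually switched over.)

  rem On the Windows guest: mark the GPU function as MSI-capable, then reboot
  reg add "HKLM\SYSTEM\CurrentControlSet\Enum\PCI\<device-instance>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties" /v MSISupported /t REG_DWORD /d 1 /f

  # On the host: "MSI: Enable+" in the capability list means MSI is in use
  lspci -vv -s 01:00.0 | grep -i msi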

I’ve recently done the full upgrade to Ubuntu 16.04, including the newest
4.4 kernel, etc.

So somewhere between updating the host kernel and updating the NVIDIA
drivers on the guests… MSI got disabled in my guests, yet performance was
still fine (possibly even less %sys than when using MSI).

Is there a really good reason to use MSI? I also noticed that the devices
seem to be using fasteoi emulation where they weren’t before (they used to
be listed with plain old interrupt assignment). Now both cards seem to
share a total of 2 fasteoi interrupts:

  16:        ...        ...        ...        ...  IR-IO-APIC   16-fasteoi   vfio-intx(0000:01:00.0), vfio-intx(0000:06:00.0)
  17:         53         58          0          0  IR-IO-APIC   17-fasteoi   vfio-intx(0000:01:00.1), vfio-intx(0000:06:00.1)
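
(For comparison: when MSI is in use, the vfio lines in /proc/interrupts
show up as vfio-msi[n](...) on PCI-MSI edge vectors instead of
vfio-intx(...) on the IO-APIC fasteoi lines. The IRQ number and counts
below are made up for illustration.)

  $ grep vfio /proc/interrupts
   34:      12345          0          0          0  IR-PCI-MSI 524288-edge   vfio-msi[0](0000:01:00.0)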


The only other thing I can think of that changed is that I’m forcing the
guests to use x2APIC. I’ve read that x2APIC with APICv hardware support
offers the best virtual interrupt handling available in KVM/QEMU. Is it
possible that this is even better than virtual MSI?
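
(In case it matters for the diagnosis: forcing x2APIC is just a CPU flag
on the QEMU command line, or the equivalent libvirt CPU feature element,
and as I understand it you can check whether APICv is active on the host
via the kvm_intel module parameter. The QEMU invocation below is trimmed
to just that flag.)

  # expose x2APIC to the guest (libvirt: <feature policy='require' name='x2apic'/>)
  qemu-system-x86_64 -cpu host,+x2apic ...

  # check whether APICv is enabled in kvm_intel on the host (prints Y or N)
  cat /sys/module/kvm_intel/parameters/enable_apicv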


Basically, I was wondering if somebody could explain the
relationship/differences between fasteoi, MSI, and x2APIC in the context
of 4.4+ Linux and modern Intel hardware (Skylake in this case).