[vfio-users] Device passthrough using pci-assign

sL1pKn07 SpinFlo sl1pkn07 at gmail.com
Wed Oct 18 22:59:25 UTC 2017


El 18 oct. 2017 8:53 p. m., "Alex Williamson" <alex.williamson at redhat.com>
escribió


As of kernel v4.12, legacy KVM device assignment has been removed from
the kernel (it had previously been deprecated), so I suggest starting
by migrating your configuration to vfio-pci instead.  You can find
vfio documentation in the Linux kernel tree:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/vfio.txt

In my blog:

http://vfio.blogspot.com/

And various other places.
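As a concrete starting point, binding a device to vfio-pci generally looks
like the following sketch. The PCI address and vendor:device IDs here are
hypothetical placeholders; substitute your own from `lspci -nn`, and note
the sysfs writes require root:

```shell
# Sketch: bind a PCI device to vfio-pci. The address and IDs below are
# hypothetical; substitute your own from `lspci -nn`. Requires root.
bind_vfio() {
    dev="$1"        # e.g. 0000:01:00.0
    ids="$2"        # e.g. "10de 13c2" (vendor device)

    modprobe vfio-pci 2>/dev/null || echo "note: could not load vfio-pci"

    # Unbind from the current host driver, if any
    if [ -e "/sys/bus/pci/devices/$dev/driver/unbind" ]; then
        echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"
    fi

    # Ask vfio-pci to claim this vendor:device ID
    if [ -w /sys/bus/pci/drivers/vfio-pci/new_id ]; then
        echo "$ids" > /sys/bus/pci/drivers/vfio-pci/new_id \
            && echo "vfio-pci now claims $dev"
    else
        echo "vfio-pci driver not available; nothing bound"
    fi
}

bind_vfio 0000:01:00.0 "10de 13c2"
```

After a successful bind the device shows up under /dev/vfio/ grouped by its
IOMMU group, which is what QEMU's vfio-pci device option consumes.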

Regarding interrupt behavior, without additional hardware assistance,
assigned device interrupts are handled by the host kernel and injected
into the guest.  This does not generally incur much load on the host
since the interrupt is simply forwarded and not serviced at the device
in the host.  Usually the problem here is the additional latency, which
is one of the more notable overheads when using device assignment.

Various processor offloads can help to alleviate this.  Intel first
introduced APICv, which, among other things, allows interrupts to be
injected into the VM via IPI without incurring a VM exit.  To take
advantage of this, you'd want a) hardware supporting APICv and b) to
redirect the physical interrupt to a host processor not running a vCPU,
such that the interrupt itself doesn't trigger a world switch.  The
kvm_intel module has an enable_apicv option; if you can't enable it, as
seen in /sys/module/kvm_intel/parameters/enable_apicv, then your
hardware doesn't support it.
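A quick way to check (a) on a given host is to read that module parameter,
and (b) amounts to steering the device's IRQ with /proc/irq affinity masks.
A sketch, using a hypothetical IRQ number 42 (look up the real one for your
device in /proc/interrupts):

```shell
# Sketch: check APICv support, then pin a device IRQ to host CPU 0.
check_apicv() {
    p=/sys/module/kvm_intel/parameters/enable_apicv
    if [ -r "$p" ]; then
        echo "enable_apicv: $(cat "$p")"
    else
        echo "enable_apicv: not present (kvm_intel unloaded, or no APICv)"
    fi
}
check_apicv

# Hypothetical IRQ number; find the assigned device's IRQ in /proc/interrupts.
# Writing a CPU mask of 0x1 steers it to CPU 0 only (requires root); keep the
# vCPU threads pinned to other CPUs so the interrupt doesn't preempt them.
IRQ=42
if [ -w "/proc/irq/$IRQ/smp_affinity" ]; then
    echo 1 > "/proc/irq/$IRQ/smp_affinity"
fi
```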

Still newer Intel hardware supports posted interrupts, which allow
interrupts to be injected directly into the VM, bypassing the host
entirely.  This one is even trickier to set up; as I understand it, the
interrupt needs to arrive at the pCPU while it's running the vCPU to
which the interrupt is directed in the VM.  I'm not entirely sure how
to test for this support, but unless you're running pretty recent Xeon
E5+ processors, it's probably safe to assume it's not there.
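Alex says he's not sure how to test for this; one heuristic, on kernels new
enough (roughly v5.6+) to list VMX sub-features in /proc/cpuinfo, is to look
for the posted_intr flag.  A sketch; absence of the flag on an older kernel
is inconclusive:

```shell
# Heuristic sketch: newer kernels list VMX sub-features (including
# posted_intr) in /proc/cpuinfo; older kernels don't, so the flag being
# absent there proves nothing about the hardware.
check_posted_intr() {
    if grep -qw posted_intr /proc/cpuinfo 2>/dev/null; then
        echo "posted_intr: reported by CPU"
    else
        echo "posted_intr: not reported (unsupported, or kernel predates the flag)"
    fi
}
check_posted_intr
```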


When you say recent Xeon E5+, do you mean, for example, a Xeon
E5-2650 v4 processor?


The above optimizations also assume that you're using a modern device
which supports nicely aligned MMIO-mapped registers and MSI or MSI-X
interrupts.  If you're using a device that requires INTx, makes use
of I/O port registers, or interacts with the device's PCI config space
on each interrupt, or perhaps even if you're using an old OS in the VM
that flips MSI mask bits regularly, most of those actions require
hypervisor interaction and will reduce the performance and increase the
overhead of the device.  Thanks,

Alex

_______________________________________________
vfio-users mailing list
vfio-users at redhat.com
https://www.redhat.com/mailman/listinfo/vfio-users



Greetings