[vfio-users] Cannot get vfio-pci to work

Alex Williamson alex.williamson at redhat.com
Wed Aug 15 18:27:51 UTC 2018


[cc +vfio-users]

On Wed, 15 Aug 2018 20:14:29 +0200
Jes Urup <jes at urup.me> wrote:

> Yes, I enabled that in /etc/default/grub cmdline. I have this: iommu=on
> intel_iommu=on hugepages=1024
> When I run that command I get this:
> https://paste.pound-python.org/show/8FrWghxbcz8qWYdAh8wl/
> 
> I want 0000:03:00.0 and 0000:04:00.0, which are Intel NICs, to use vfio-pci.
> 
> I've also added all the configs to the kernel.
> 
> When I run virt-manager I cannot see the device that I've set to use
> the vfio-pci driver, and I cannot find anything in dmesg.
> I also have this: options vfio-pci ids=8086:10d3 in
> /etc/modprobe.d/vfio.conf

I don't understand what you're saying here; virt-manager doesn't care
whether devices are initially bound to vfio-pci, and "Add Hardware ->
PCI Host Device" shows every PCI device in the system.  If you pick
3:00.0 and 4:00.0, then libvirt will default to managed='yes', which
means that libvirt will bind the device to vfio-pci for you when the VM
is started and unbind it when the VM is stopped.  It's not absolutely
necessary to pre-bind devices to vfio-pci; it's only recommended if you
only plan to use the device with vfio-pci, or for special cases like
GPUs where host drivers don't always behave well with unbinding or put
the device into a bad state.
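
For reference, a device assigned this way shows up in the libvirt
domain XML roughly like this (a sketch; the address values below
correspond to 03:00.0 from this thread, and managed='yes' is what
libvirt defaults to when you add the device through virt-manager):

  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </source>
  </hostdev>

With managed='yes', libvirt performs the bind to vfio-pci at VM start
and rebinds the host driver at VM shutdown, so no pre-binding is
required.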

Setting an ids= option for vfio-pci only adds the ID to the vfio-pci
driver: it will bind to devices matching that ID when it's loaded, but
only if those devices are not already bound to another driver.  It
won't cause the e1000e driver to release devices it has already
claimed.  Since you're using libvirt, you can also use:

# virsh nodedev-detach pci_0000_03_00_0
# virsh nodedev-detach pci_0000_04_00_0

to bind the devices to vfio-pci.  Thanks,
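
Alternatively, if you want to rebind a device without going through
libvirt, the same thing can be done directly through sysfs using the
standard driver_override interface (a sketch, run as root; substitute
your own device address):

  # echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
  # echo vfio-pci > /sys/bus/pci/devices/0000:03:00.0/driver_override
  # echo 0000:03:00.0 > /sys/bus/pci/drivers_probe

The driver_override entry makes vfio-pci the only driver that the
drivers_probe step will consider for that device, regardless of ID
matching.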

Alex



