[libvirt] How should libvirt apps enable virtio-pci for aarch64?

Pavel Fedin p.fedin at samsung.com
Wed Dec 9 10:35:37 UTC 2015


 Hello!

> >   I don't remember the exact outcome. So, i decided to use addPCIeRoot instead, and
> > everything just worked.
> 
> Except the extra stuff. The origin of the problem was in my overloading
> "addPCIeRoot" to indicate that the other two controllers should also be
> added. You only made the mistake of thinking that the name of the
> variable was actually an accurate/complete description of what it did :-)

 No, it was fine, and I don't consider it a mistake. All I needed was to tell libvirt somehow that the machine has PCIe, and that did
the job perfectly. I knew that it added two more devices for some reason, but decided just to leave them as they were, because I
assumed there was a reason for them to be there; I just didn't care what exactly that reason was. I am only questioning it now.
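
 For reference, the controller set in question looks roughly like this in the domain XML (just a sketch; the exact index and address
attributes depend on the rest of the configuration):
--- cut ---
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='1' model='dmi-to-pci-bridge'/>
<controller type='pci' index='2' model='pci-bridge'/>
--- cut ---
 The pcie-root is what I actually needed; the dmi-to-pci-bridge (i82801b11-bridge on the qemu command line) and the pci-bridge are
the two extra devices in question.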

> But apparently qemu is accepting "-device i82801b11-bridge" and -device
> pci-bridge, since that's what you get from libvirt. If these devices
> aren't supported for aarch64, they should be disabled in qemu (and not
> listed in the devices when libvirt asks).

 It's actually supported, and everything just works. Here is what I get in the machine:
--- cut ---
[root@localhost ~]# lspci -v
00:00.0 Host bridge: Red Hat, Inc. Device 0008
	Subsystem: Red Hat, Inc Device 1100
	Flags: fast devsel
lspci: Unable to load libkmod resources: error -12

00:01.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 92) (prog-if 01 [Subtractive decode])
	Flags: 66MHz, fast devsel
	Bus: primary=00, secondary=01, subordinate=02, sec-latency=0
	Memory behind bridge: 10000000-100fffff
	Capabilities: [50] Subsystem: Device 0000:0000

00:02.0 SCSI storage controller: Red Hat, Inc Virtio SCSI
	Subsystem: Red Hat, Inc Device 0008
	Flags: bus master, fast devsel, latency 0, IRQ 39
	I/O ports at 1000 [size=64]
	Memory at 10140000 (32-bit, non-prefetchable) [size=4K]
	Capabilities: [40] MSI-X: Enable+ Count=4 Masked-
	Kernel driver in use: virtio-pci

00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
	Subsystem: Red Hat, Inc Device 0001
	Flags: bus master, fast devsel, latency 0, IRQ 40
	I/O ports at 1040 [size=32]
	Memory at 10141000 (32-bit, non-prefetchable) [size=4K]
	[virtual] Expansion ROM at 10100000 [disabled] [size=256K]
	Capabilities: [40] MSI-X: Enable+ Count=3 Masked-
	Kernel driver in use: virtio-pci

00:04.0 Ethernet controller: Cavium, Inc. Device 0011
	Subsystem: Cavium, Inc. Device a11e
	Flags: bus master, fast devsel, latency 0
	Memory at 8000000000 (64-bit, non-prefetchable) [size=2M]
	Memory at 8000200000 (64-bit, non-prefetchable) [size=2M]
	Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
	Capabilities: [b0] MSI-X: Enable+ Count=20 Masked-
	Capabilities: [100] Alternative Routing-ID Interpretation (ARI)
	Kernel driver in use: thunder-nicvf

01:01.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge (prog-if 00 [Normal decode])
	Flags: 66MHz, fast devsel
	Memory at 10000000 (64-bit, non-prefetchable) [disabled] [size=256]
	Bus: primary=01, secondary=02, subordinate=02, sec-latency=0
	Capabilities: [4c] MSI: Enable- Count=1/1 Maskable+ 64bit+
	Capabilities: [48] Slot ID: 0 slots, First+, chassis 02
	Capabilities: [40] Hot-plug capable
--- cut ---

00:04.0 is the VFIO passthrough device, and I put everything on bus 0 by hand so that MSI-X works. I could also leave the devices on
bus 2, but in that case I don't get MSI-X, which, for example, makes vhost-net unable to use irqfds (or it cannot initialize at all;
again, I don't remember exactly), because that requires per-event IRQs.
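
 Putting a device on bus 0 by hand, in domain XML terms, just means giving it an explicit <address> element. A minimal sketch (the
host-side source address and the guest slot number here are only placeholders):
--- cut ---
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</hostdev>
--- cut ---
 The same kind of explicit <address> is how the virtio devices above ended up on bus 0 as well.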

Kind regards,
Pavel Fedin
Expert Engineer
Samsung Electronics Research center Russia
