[libvirt] [PATCH v5 0/4] qemu: Allow PCI virtio on ARM "virt" machine
Laine Stump
laine at laine.org
Wed Aug 12 04:24:46 UTC 2015
On 08/11/2015 10:13 PM, Alex Williamson wrote:
> On Tue, 2015-08-11 at 19:26 -0400, Laine Stump wrote:
>> (Alex - I cc'ed you because I addressed a question or two your way down
>> towards the bottom).
>>
>> On 08/11/2015 02:52 AM, Pavel Fedin wrote:
>>> Hello!
>>>
>>> The original patches to support pcie-root severely restricted what could
>>> plug into what, because in real hardware you can't plug a PCI device into
>>> a PCIe slot (physically it doesn't work).
>>> But how do you know whether the device is PCI or PCIe? I don't see anything like this in the code; I see, for example, that "all network cards are PCI", which is, BTW, not true in the real world.
>> Two years ago when I first added support for q35-based machinetypes and
>> the first pcie controllers, I had less information than I do now. When
>> I looked in the output of "qemu-kvm -device ?" I saw that each device
>> listed the type of bus it connected to (PCI or ISA), and assumed that
>> even though at the time qemu didn't differentiate between PCI and PCIe
>> there, since the two things *are* different in the real world, it
>> likely eventually would. I wanted the libvirt code to be prepared for
>> that eventuality. Of course every example device (except the PCIe
>> controllers themselves) ends up with the flag saying that it can
>> connect to a PCI bus, not PCIe.
>>
>> Later I was told that, unlike the real world where, if nothing else,
>> the physical slots themselves limit you, any normal PCI device in qemu
>> can be plugged into either a PCI or a PCIe slot. There are still
>> several restrictions, though, and they turned out to be more
>> complicated than the naive PCI vs. PCIe split I originally imagined -
>> just look at the restrictions on the different PCIe controllers:
>>
>> ("pcie-sw-up-port" == "pcie-switch-upstream-port", "pcie-sw-dn-port" ==
>> "pcie-switch-downstream-port")
>>
>> name                   upstream           downstream
>> ---------------------  -----------------  -------------------------------
>> pcie-root              none               any endpoint
>>                                           pcie-root-port
>>                                           dmi-to-pci-bridge
>>                                           pci-bridge
>>                                           31 ports NO hotplug
>>
>> dmi-to-pci-bridge      pcie-root          any endpoint device
>> NO hotplug             pcie-root-port     pcie-sw-up-port
>>                        pcie-sw-dn-port    32 ports NO hotplug
> Hmm, pcie-sw-up-port on the downstream is a stretch here. pci-bridge
> should be allowed downstream though.
You're right, I messed up the chart. pcie-sw-up-port can only plug into
pcie-root-port or pcie-sw-dn-port. And I forgot to add in pci-bridge.
Of course my main objective was to graphically point out that you can't
just plug "anything" into "anything" :-)
>> pcie-root-port         pcie-root only     any endpoint
>> NO hotplug                                dmi-to-pci-bridge
>>                                           pcie-sw-up-port
>>                                           1 port hotpluggable
>>
>> pcie-sw-up-port        pcie-root-port     pcie-sw-dn-port
>> "kind of" hotpluggable pcie-sw-dn-port    32 ports "kind of" hotpluggable
>>
>> pcie-sw-dn-port        pcie-sw-up-port    any endpoint
>> "kind of" hotplug                         pcie-sw-up-port
>>                                           1 port hotpluggable
>>
>> pci-bridge             pci-root           any endpoint
>> NO hotplug (now)       pcie-root          pci-bridge
>>                        dmi-to-pci-bridge  32 ports hotpluggable
>>                        pcie-root-port
>>                        pcie-sw-dn-port
>>
>> So the original restrictions I placed on what could plug in where were
>> *too* restrictive for endpoint devices, but other restrictions were
>> useful, and the framework came in handy as I learned the restrictions
>> of each new pci controller model.
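
To make the chart concrete, here is a sketch of what such a controller
hierarchy looks like in libvirt domain XML. This is illustrative only:
the index values and PCI addresses are made up for the example (normally
libvirt auto-assigns them), but each controller is placed on an upstream
bus the chart permits.

```xml
<!-- Illustrative sketch: a q35-style controller hierarchy that follows
     the chart above. Indexes/addresses are invented for the example. -->
<controller type='pci' index='0' model='pcie-root'/>

<!-- dmi-to-pci-bridge plugs into pcie-root and provides a conventional
     PCI bus (but its own slots are not hotpluggable) -->
<controller type='pci' index='1' model='dmi-to-pci-bridge'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
</controller>

<!-- pci-bridge plugs into the dmi-to-pci-bridge; its 32 slots ARE
     hotpluggable, so endpoint devices usually land here -->
<controller type='pci' index='2' model='pci-bridge'>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
</controller>

<!-- pcie-root-port plugs into pcie-root; per the chart, a
     pcie-switch-upstream-port can only plug into a pcie-root-port
     (or a pcie-switch-downstream-port) -->
<controller type='pci' index='3' model='pcie-root-port'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</controller>
<controller type='pci' index='4' model='pcie-switch-upstream-port'>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</controller>
<controller type='pci' index='5' model='pcie-switch-downstream-port'>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</controller>
```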
> System software ends up being pretty amenable as well since PCIe is
> software compatible with conventional PCI. If we have a guest-based
> IOMMU though, things could start to get interesting because the
> difference isn't so transparent. The kernel starts to care about
> whether a device is express and expects certain compatible upstream
> devices as it walks the topology. Thankfully though real hardware gets
> plenty wrong too, so we only have to be not substantially worse than
> real hardware ;)
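
As a pointer for the guest-IOMMU case Alex raises: in later libvirt and
QEMU releases an emulated Intel IOMMU can be requested directly from the
domain XML, at which point the guest kernel really does start caring
about which devices are express as it walks the topology. A minimal
sketch, assuming a libvirt/QEMU new enough to support the intel-iommu
device (q35 machinetype required):

```xml
<!-- Sketch only: requires a libvirt/QEMU with intel-iommu support
     and a q35 machinetype -->
<devices>
  <iommu model='intel'>
    <driver intremap='on'/>
  </iommu>
</devices>
```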
>