[vfio-users] VFIO-PCI with AARCH64 QEMU

Haynal, Steve Steve_Haynal at mentor.com
Wed Oct 26 00:16:21 UTC 2016


Hi All,

I can enable the memory region with the "setpci -s 00:09.0 COMMAND=2:2" command. For proof-of-concept tests, I can get by with a shared memory size of 8MB, which should fit. I can also switch to 64-bit BARs. Both of these changes require resynthesizing the FPGA design overnight and may cause other problems, so I will report back tomorrow if it works.
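
For anyone else trying this, the change is easy to verify from the guest (quick sketch, assuming the device is still at 00:09.0):

  # read back the 16-bit COMMAND register; bit 1 (mask 0x2) is Memory Space Enable
  setpci -s 00:09.0 COMMAND
  # lspci's Control line should now show "Mem+" rather than "Mem-"
  lspci -s 00:09.0 -vv | grep Control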

I am using the stock kernel in the current Xenial aarch64 cloud image from Ubuntu (4.4.0-45). I will build a newer kernel.

I also prefer the enumeration and standardization of a PCI device over a platform device, but some of our customers want the virtual environment to more closely match their final hardware target environment. I will take a look at ivshmem. 
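
From a first look at the documentation, wiring ivshmem up would be something along these lines (untested sketch; the 8MB size and the backing path are placeholders for our setup):

  # back the shared region with a host file and expose it to the guest as a PCI BAR
  qemu-system-aarch64 -M virt ... \
    -object memory-backend-file,id=hostmem,share=on,size=8M,mem-path=/dev/shm/bige-shmem \
    -device ivshmem-plain,memdev=hostmem

If I read the documentation correctly, the guest then sees the shared region as BAR 2 of the ivshmem PCI device.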

Thanks again to all for the help.
 
Best Regards,

Steve Haynal


-----Original Message-----
From: Laszlo Ersek [mailto:lersek at redhat.com] 
Sent: Tuesday, October 25, 2016 3:01 PM
To: Ard Biesheuvel; Haynal, Steve
Cc: Alex Williamson; vfio-users at redhat.com; Eric Auger
Subject: Re: [vfio-users] VFIO-PCI with AARCH64 QEMU

On 10/25/16 23:10, Ard Biesheuvel wrote:
> On 25 October 2016 at 21:38, Haynal, Steve <Steve_Haynal at mentor.com> wrote:
>> Hi All,
>>
>> Thanks for the help. I've started using explicit pflash drives 
>> instead of -bios. The firmware I was using was 15.12 from 
>> https://releases.linaro.org/components/kernel/uefi-linaro/15.12/release/qemu64/QEMU_EFI.fd.
>> This was not producing any interesting debug output, so I built my 
>> own from git following these instructions 
>> https://wiki.linaro.org/LEG/UEFIforQEMU . This produces the output 
>> shown below. Once booted, the lspci output still looks the same as 
>> before. If I add acpi=force during boot or compile with -D 
>> PURE_ACPI_BOOT_ENABLE, the boot always hangs at the line " EFI
>> stub: Exiting boot services and installing virtual address map..."
>> Boot completes without these options. Any ideas on why the memory 
>> regions show up as disabled in lspci, and why the large 512MB region 
>> is ignored?
>>
>> The 512MB memory region is quite a bit to reserve. We have Google's 
>> BigE hardware IP (see
>> https://www.webmproject.org/hardware/vp9/bige/) running on an FPGA.
>> This IP shares memory with the host and currently Google's driver 
>> allocates memory from this 512MB region when it must be shared 
>> between the application and IP on the FPGA. We want to test this IP 
>> on a virtual aarch64 platform and hence the device pass through and 
>> interest in vfio. Eventually, we'd like these passed through memory 
>> regions to appear as platform devices. Is it possible/recommended to 
>> hack the vfio infrastructure such that a PCI device on the host side 
>> appears as a platform device in an aarch64 Qemu machine? We've done 
>> something similar with virtual device drivers. Should we stick with 
>> virtual device drivers?
>>
> 
> While informative, the way the firmware handles the PCI resource 
> allocation is not highly relevant, given that you're not booting from 
> the device, and on arm64, the kernel will reallocate all PCI resources 
> anyway.

It was me who asked for the firmware log. I had known (from you) about the arm64 kernel reallocating PCI resources unconditionally, but I wanted to see whether the firmware encountered the same symptoms.

It did, apparently. Had it not, that would have implied a problem with the guest kernel. (This is what I was trying to discern.)

> 
> The relevant bit from the kernel log is
> 
> [   62.992533] pci 0000:00:09.0: BAR 1: no space for [mem size 0x20000000]
> [   62.992669] pci 0000:00:09.0: BAR 1: failed to assign [mem size 0x20000000]
> 
> The 32-bit window for MMIO BAR allocation is simply not large enough 
> to allocate both BARs in a way that adheres to all range and alignment 
> requirements. It looks as if on arm64, the BARs are not sorted by 
> size/alignment,

I think I disagree.

While I only assume that the arm64 kernel does the same sorting+grouping in decreasing alignment order as the x86_64 kernel does, I know for a fact that the edk2 PCI Bus driver does the sorting+grouping regardless of architecture. The cause is different; namely:

The 32-bit MMIO aperture exposed by "qemu-system-aarch64 -M virt" is:

  [0x1000_0000, 0x3EFE_FFFF] == [256 MB, 1007 MB + 960 KB)

In this range, the only base address that satisfies the 512MB alignment requirement is 512MB itself, i.e., 0x2000_0000. And, from that base address up, there isn't enough room left in the aperture for the 512MB size of the BAR: only 1007 MB + 960 KB - 512 MB = 495 MB + 960 KB remain free.

> but are simply allocated in order, which means naturally aligning the 
> 512 MB wastes ~512 MB of 32-bit MMIO space, leaving insufficient 
> space. On x86, the BARs are apparently allocated in a saner order.

I agree there isn't enough room left, but it's not due to lack of sorting. The reason is that, given the specific boundaries of the 32-bit MMIO aperture, the largest BAR that can be accommodated is 256MB in size.
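
The arithmetic is easy to double-check in a shell, for what it's worth (quick sketch, using the aperture bounds quoted above, with the end taken as exclusive, i.e., 0x3EFF_0000):

  base=$((0x10000000)); end=$((0x3EFF0000))
  for bar in $((0x10000000)) $((0x20000000)); do    # 256 MB and 512 MB BARs
    aligned=$(( (base + bar - 1) & ~(bar - 1) ))    # lowest naturally aligned base
    printf 'BAR %4d MB: base 0x%08X, room left %4d MB\n' \
      $(( bar >> 20 )) "$aligned" $(( (end - aligned) >> 20 ))
  done
  # BAR  256 MB: base 0x10000000, room left  751 MB  -> fits
  # BAR  512 MB: base 0x20000000, room left  495 MB  -> does not fit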

Thanks
Laszlo



