[vfio-users] Boot using second GPU?

Rokas Kupstys rokups at zoho.com
Fri Aug 5 08:22:15 UTC 2016


Okay, this is unexpected luck. After more tinkering I got it to work!
Here is my setup:

  * AMD FX-8350 CPU + Sabertooth 990FX R2 motherboard
  * 0000:01:00.0 - gpu in first slot
  * 0000:06:00.0 - gpu in third slot
  * UEFI on host and guest.
  * Archlinux

In order to make the host use the non-boot GPU:

1. Add the kernel boot parameter "video=efifb:off". This keeps the kernel
off the first GPU, so boot messages appear on the second GPU.
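
One way to add that parameter, assuming GRUB is the bootloader (systemd-boot
users would append it to the options line of their boot entry instead):

> # /etc/default/grub - append the parameter to the existing kernel command line
> GRUB_CMDLINE_LINUX_DEFAULT="quiet video=efifb:off"
> # regenerate the config so the change takes effect on the next boot
> grub-mkconfig -o /boot/grub/grub.cfg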

2. Bind the first GPU (0000:01:00.0) to the vfio-pci driver. I did this by
adding the line

> options vfio-pci ids=1002:677b,1002:aa98

to /etc/modprobe.d/kvm.conf. The IDs are obtained from "lspci -n", which in
my case shows:

> 01:00.0 0300: 1002:677b
> 01:00.1 0403: 1002:aa98
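
A quick way to confirm the binding took effect after reboot (both the GPU and
its HDMI audio function should report vfio-pci):

> # show the kernel driver in use for both functions of the card in slot 1
> lspci -nnk -s 01:00
> # expect "Kernel driver in use: vfio-pci" under 01:00.0 and 01:00.1
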
3. Configure Xorg to use the second GPU (0000:06:00.0). I added the file
/etc/X11/xorg.conf.d/secondary-gpu.conf with the following contents:

> Section "Device"
>     Identifier     "Device0"
>     Driver         "radeon"
>     VendorName     "AMD Corporation"
>     BoardName      "AMD Secondary"
>     BusID          "PCI:6:0:0"
> EndSection
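
To double-check that Xorg really picked the card at 06:00.0, its log should
show the radeon driver initializing (the log may live at
~/.local/share/xorg/Xorg.0.log instead when X runs rootless):

> grep -i radeon /var/log/Xorg.0.log
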
And that's it! Now when the machine boots it shows the POST messages and the
bootloader on the first GPU, but as soon as a boot option is selected the
display goes blank and kernel boot messages show up on the second GPU. After
boot you can assign the first GPU to a VM as usual and it works.
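
The exact VM command depends on your setup; as a rough sketch of the
assignment itself (the OVMF and disk paths below are placeholders, and libvirt
users would add two hostdev entries instead):

> qemu-system-x86_64 -enable-kvm -machine q35 -cpu host -m 8G \
>     -drive if=pflash,format=raw,readonly=on,file=/usr/share/ovmf/x64/OVMF_CODE.fd \
>     -device vfio-pci,host=01:00.0 \
>     -device vfio-pci,host=01:00.1 \
>     -drive file=/path/to/guest.img,format=qcow2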

Help request: could someone with Intel hardware (ideally the X99 chipset)
test this method? I am planning a build, and if this works I could settle for
a 28-lane CPU and save a couple hundred dollars. Intel's 40-lane CPUs are way
overpriced. And with 28-lane CPUs only the first slot can run at x16 speed
while the other slots drop to x8 or less. Anyhow, I would love to hear if this
works on Intel hardware.

Rokas Kupstys

On 2016.08.05 10:34, Rokas Kupstys wrote:
> I think I got halfway there. My primary GPU is at 0000:01:00.0 and the
> secondary at 0000:06:00.0. I used the following Xorg config:
>
> Section "Device"
>     Identifier     "Device0"
>     Driver         "radeon"
>     VendorName     "AMD Corporation"
>     BoardName      "AMD Secondary"
>     BusID          "PCI:6:0:0"
> EndSection
>
> After booting, 0000:06:00.0 was still bound to vfio-pci (I have yet to
> figure out why, as I removed the modprobe configs and kernel parameters),
> and I ran the following script to bind the GPU to the correct driver:
>
> #!/bin/bash
>
> # detach the device from whatever driver currently owns it and wait
> # until the unbind has actually completed
> unbind() {
>     dev=$1
>     if [ -e /sys/bus/pci/devices/${dev}/driver ]; then
>         echo "${dev}" > /sys/bus/pci/devices/${dev}/driver/unbind
>         while [ -e /sys/bus/pci/devices/${dev}/driver ]; do
>             sleep 0.1
>         done
>     fi
> }
>
> # register the device's vendor/device ID with the target driver, then
> # bind the device to it (the explicit bind may fail harmlessly if
> # new_id already triggered an automatic bind)
> bind() {
>     dev=$1
>     driver=$2
>     vendor=$(cat /sys/bus/pci/devices/${dev}/vendor)
>     device=$(cat /sys/bus/pci/devices/${dev}/device)
>     echo "${vendor} ${device}" > /sys/bus/pci/drivers/${driver}/new_id
>     echo "$dev" > /sys/bus/pci/drivers/${driver}/bind
> }
>
> unbind "0000:06:00.0"
> bind "0000:06:00.0" "radeon"
> #unbind "0000:01:00.0"
>
> After restarting sddm.service (the display manager) I could switch to the
> secondary GPU and log in to the desktop. All worked. The problem is that I
> cannot unbind 0000:01:00.0 so that I could pass it through. Attempting to
> unbind the driver resulted in the display freezing. Even the secondary GPU
> froze.
>
>
> Rokas Kupstys
>
> On 2016.08.05 04:55, Nicolas Roy-Renaud wrote:
>> That's something you should fix in the BIOS. The boot GPU is special
>> because the motherboard has to use it to display things such as POST
>> messages, so it's already "tainted" by the time the kernel gets a hold
>> of it. I had to put my guest GPU in my motherboard's second PCI slot
>> because of that (I can't change the boot GPU in the BIOS settings),
>> which is pretty inconvenient because it blocks access to most of my
>> SATA ports.
>>
>> If there's a way to cleanly pass the boot GPU to a VM, I don't know
>> about it. I'd be interested to know too, however.
>>
>> - Nicolas
>>
>> On 2016-08-04 13:59, Rokas Kupstys wrote:
>>> Hey, is it possible to make the kernel use a GPU other than the one in
>>> the first slot? If so, how?
>>>
>>> I have multiple PCIe slots but only the first can run at full speed, so
>>> I would like to use it for VGA passthrough. However, if I put a powerful
>>> GPU into the first slot, Linux boots using that GPU. I would like to make
>>> the kernel use the GPU in slot 3. So the result should be the BIOS and
>>> bootloader running on the GPU in slot #1, but the kernel should use the
>>> GPU in slot #3. I tried binding the first GPU to the vfio-pci driver,
>>> hoping the kernel would use the next available GPU. That did not work; I
>>> could see one line with the systemd version in a low-res console
>>> (normally it's high-res). I also tried fbcon=map:1234 (not exactly being
>>> sure what I was doing) but that yielded a black screen. Not sure what
>>> else I could try.
>>>


