[vfio-users] NVIDIA GPU Passthrough to Win10 - Driver Disabled (Code 43)

Alex Williamson alex.williamson at redhat.com
Mon Jul 25 21:02:44 UTC 2016


On Mon, 25 Jul 2016 15:50:51 -0500
Jayme Howard <g.prime at gmail.com> wrote:

> I don't have an XML example handy, but you're missing the Nvidia workaround
> flags.  In the CLI version, they look like the following:
> 
> OPTS="$OPTS -cpu
> host,kvm=off,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=Nvidia43FIX"
> 
> All those hv_ flags, together with kvm=off and hv_vendor_id, are what get
> around Nvidia's virtualization detection.  You also don't mention which
> version of QEMU you're running, but I believe those flags were added in 2.3
> or 2.4.  2.5 is out now and uses them as well.

Yep, XML like:

    <hyperv>
      ...
      <vendor_id state='on' value='KeenlyKVM'/>
      ...
    </hyperv>
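
For reference, a fuller <hyperv> block might look roughly like this (the
enlightenments mirror the hv_ flags in the CLI example above; the vendor_id
string is arbitrary, it just needs to be non-empty):

    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='KeenlyKVM'/>
    </hyperv>

Note that hv_time corresponds to <timer name='hypervclock' present='yes'/>
under <clock>, not to anything inside <hyperv>.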

Alternatively (less desirable), remove all the hyper-v related options
as documented in the blog.  Thanks,

Alex


> On Mon, Jul 25, 2016 at 3:45 PM, Steven Bell <stv.bell07 at gmail.com> wrote:
> 
> > Hello,
> >
> > I am currently trying to set up a Windows 10 VM on a Fedora 23 host with
> > QEMU-KVM, passing through a NIC, a USB controller, and an NVIDIA GPU (GTX
> > 670). With my current setup, the NIC and USB controller are both passed
> > through and function without issue. The GPU driver gives the message
> > "Windows has stopped this device because it has reported problems. (Code
> > 43)".
> >
> > I've been following Alex Williamson's guide (
> > http://vfio.blogspot.ca/2015/05/vfio-gpu-how-to-series-part-3-host.html )
> > and I believe I have successfully configured things on the host.
> >
> > On the host, if I use "lshw" to look at my hardware devices, I can find
> > the NIC, the USB controller, and both the GPU's video and audio controllers.
> > They all correctly list their driver as "vfio-pci". All the device IDs are
> > listed in the modprobe.d file, and I believe the vfio-pci binding is proof
> > that this is working and that the host is not claiming these devices at boot.
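> >
> > For reference, the modprobe.d entry is a single options line along these
> > lines (the vendor:device IDs below are illustrative; I use the values
> > "lspci -nn" reports for the GTX 670 and its audio function):
> >
> >     options vfio-pci ids=10de:1189,10de:0e0a
> >
> > and "lspci -nnk" shows "Kernel driver in use: vfio-pci" for each of them.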
> >
> > I have also verified that the motherboard (MSI WORKSTATION C236A) groups
> > the PCI devices correctly. The NIC and the USB controller are each in their
> > own IOMMU group, and the NVIDIA GPU's group contains three devices: the PCIe
> > root port (which I believe should NOT be passed through) and the GPU's video
> > and audio controllers, both of which WILL be passed through.
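> >
> > For anyone wanting to double-check their own groups, a small shell loop
> > over sysfs will print each group number and the lspci line for every device
> > in it:
> >
> >     for g in /sys/kernel/iommu_groups/*; do
> >       echo "IOMMU group ${g##*/}"
> >       for d in "$g"/devices/*; do
> >         echo -e "\t$(lspci -nns "${d##*/}")"
> >       done
> >     done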
> >
> > I configure an i440FX machine using virt-manager and set the firmware to
> > UEFI x86_64. Initially, I do not add any of the pass-through PCI devices,
> > and I install Windows onto the VM.
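> >
> > The resulting os section of the XML looks roughly like the following; the
> > machine version and firmware paths are simply whatever virt-manager filled
> > in on my system, so treat them as illustrative:
> >
> >     <os>
> >       <type arch='x86_64' machine='pc-i440fx-2.4'>hvm</type>
> >       <loader readonly='yes' type='pflash'>/usr/share/edk2/ovmf/OVMF_CODE.fd</loader>
> >       <nvram>/var/lib/libvirt/qemu/nvram/win10_VARS.fd</nvram>
> >     </os>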
> >
> > Next, I reboot the Guest and make only the pass-through NIC available.
> > Its driver installs correctly, and I have access to the LAN it connects to.
> > I use that connection to copy over the virtio drivers for the Balloon driver
> > installation, as well as the most up-to-date NVIDIA driver installer (but I
> > don't run it yet). I also install TightVNC server.
> >
> > Next, I shut down the Guest and remove all unused devices, as described in
> > Alex's guide. I remove the Display and Video devices (I will use TightVNC
> > from here on to connect to the Guest). I also remove the USB redirect
> > devices, the virtual NIC, etc. I add the pass-through USB controller and the
> > NVIDIA audio and video devices. Before booting again, I also edit the XML
> > and add the required "<kvm><hidden state='on'/></kvm>" line in the features
> > tag. Without this, the machine blue-screens every time after the NVIDIA
> > driver has been installed.
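> >
> > For context, the features section ends up looking roughly like this (the
> > <acpi/> and <apic/> lines are just what virt-manager had already generated):
> >
> >     <features>
> >       <acpi/>
> >       <apic/>
> >       <kvm>
> >         <hidden state='on'/>
> >       </kvm>
> >     </features>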
> >
> > Now I boot the Guest again, connect via TightVNC, and install the NVIDIA
> > driver (I've tried different versions: standalone, through Windows Update,
> > etc.). The driver installs successfully
> > and requests a reboot. After rebooting, the Device Manager shows GTX 670
> > with a yellow mark and the message "Windows has stopped this device because
> > it has reported problems. (Code 43)".
> >
> > No other devices appear with an issue in the Guest's Device Manager. No
> > output is coming from the device to my screen plugged into the GPU card
> > (obviously).
> >
> > I have also checked the following:
> >     The GPU should have sufficient power. My PSU is more than powerful
> > enough. I hear the GPU fan spin up to full briefly when the Host powers on.
> >     I have checked in the Host's mobo BIOS settings that the default video
> > card is the IGD. The host boots and uses the IGD without issue.
> >     As mentioned above, all devices that should not be bound by the host
> > have vfio-pci as their listed driver.
> >     As mentioned above, the kvm hidden XML line is added. The log shows
> > the "-cpu host,kvm=off" option is used to boot the VM, and removing the line
> > from the XML causes a blue screen on boot, so I believe it's doing its job
> > (see the log check after this list).
> >     No other display adapters are present or installed. I believe a
> > pass-through GPU cannot be a secondary display device, so I've made sure of
> > this.
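> >
> > (The log check mentioned above is just a grep of the domain log that
> > libvirt writes, along the lines of:
> >
> >     grep kvm=off /var/log/libvirt/qemu/win10.log
> >
> > where "win10" stands in for whatever the guest is actually named.)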
> >
> >
> > I feel like nothing I'm doing is especially tricky, and in my mind my
> > setup SHOULD work, based on everything I've read. But honestly I've just
> > run out of ideas on how to proceed with troubleshooting this.
> >
> > Any help and ideas would be appreciated. Thanks!
> >
> >



