[vfio-users] New computer assembled, need help

Blank Field ihatethisfield at gmail.com
Thu Sep 24 05:00:37 UTC 2015


Use the Windows display-settings GUI. Or remove the QXL device from the VM entirely.
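If you take the second route, a minimal sketch of the libvirt side (the
domain name "win10" is just an example, use whatever virt-manager calls
your VM):

  virsh edit win10
  # In the XML, delete the <graphics type='vnc' .../> element and the
  # <video> element containing <model type='qxl' .../>, so the
  # passed-through GPU is the guest's only display adapter. Save, then
  # shut down and restart the VM.

The same thing can also be done from virt-manager's hardware details
view by removing the Display and Video devices.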
On Sep 24, 2015 5:45 AM, "ALG Bass" <olorin12 at gmail.com> wrote:

> Well, I've gotten Windows 10 installed under virt-manager. I'm working on
> getting the GTX 970 drivers installed and working (Code 43 stuff).
> Right now the output goes to a VNC window. I have the GTX 970 hooked up to
> my monitor on another input. How do I change things so that the VM's
> display goes to the monitor instead of the VNC window?
>
> On Wed, Sep 23, 2015 at 5:57 PM, ALG Bass <olorin12 at gmail.com> wrote:
>
>> Got up to step 5, displaying on the second GPU, we are happy! Will go
>> from here, thanks for your help!
>>
>> On Wed, Sep 23, 2015 at 5:44 PM, ALG Bass <olorin12 at gmail.com> wrote:
>>
>>> Also, I'm concerned that I'm not getting any display from the second
>>> card while the first card is plugged in *outside of Linux* as well. Could
>>> that indicate that the problem isn't in Linux at all?
>>>
>>> On Wed, Sep 23, 2015 at 5:42 PM, ALG Bass <olorin12 at gmail.com> wrote:
>>>
>>>> For step 4, is that the 02:00.0 I get for my second card with 'lspci |
>>>> grep VGA'?
>>>>
>>>>
>>>> On Wed, Sep 23, 2015 at 5:11 PM, Blank Field <ihatethisfield at gmail.com>
>>>> wrote:
>>>>
>>>>> Aww, we forgot to CC the mailing list, so my message with the steps
>>>>> stayed private :(
>>>>> About PSUs: they are usually rated by output power, which is printed
>>>>> in a small table of voltage/ampere specs on the case.
>>>>> So yes, a PSU rated for 250W output at 50% efficiency would draw
>>>>> roughly 250W / 0.5 = 500W from the wall. There is also a power factor
>>>>> in play, since we are dealing with alternating current.
>>>>> For comparison, my system with 3 low-power GPUs and a statically
>>>>> overclocked CPU draws only 101W.
>>>>> When a GPU is offline and/or idle it consumes almost zero power, and
>>>>> the PSU is underloaded. So follow my advice and we can get this going.
>>>>> Just be a little bit more verbose.
>>>>> On Sep 23, 2015 11:51 PM, "ALG Bass" <olorin12 at gmail.com> wrote:
>>>>>
>>>>>> Well, I reinstalled Fedora in full UEFI. No dice - no mirrored
>>>>>> display. I have an EVGA Supernova 750W B2 80+ Bronze PSU, and with 80%
>>>>>> efficiency it should be giving me 600W. With the FX-8350, the GTX 970 and
>>>>>> GTX 750ti, I figured it would be enough, but it might be that I'm just not
>>>>>> getting enough power?
>>>>>> I emailed Asrock and they told me to update my 'BIOS' - instant flash
>>>>>> - which I did (it updated the UEFI). Still no dice.
>>>>>> At this point, I'm probably going to stick to dual-booting, as I need
>>>>>> Photoshop for school. I think I'll try again later with a more powerful PSU
>>>>>> and see if that does the trick.
>>>>>>
>>>>>> On Wed, Sep 23, 2015 at 2:32 PM, Blank Field <
>>>>>> ihatethisfield at gmail.com> wrote:
>>>>>>
>>>>>>> Once the host GPU drivers are loaded, they no longer care about VGA
>>>>>>> access and can make the second card output video.
>>>>>>> The first card's VGA resources stay dedicated to the VM. Since only
>>>>>>> one VGA device per machine can be active, and most motherboards
>>>>>>> (except Gigabyte) assume the first card is the main one, the first
>>>>>>> GPU is the one assigned to handle VGA.
>>>>>>>
>>>>>>> Anyway, here is what you need to do to achieve almost 100% happiness:
>>>>>>> 0. Uninstall software like Plymouth that uses VGA and the framebuffer
>>>>>>> to draw some filthy loading progress bar.
>>>>>>> 1. Plug a screen into the second GPU's output.
>>>>>>> 2. Boot the host headless, without an X server. The console will use
>>>>>>> the first card's VGA capabilities.
>>>>>>> 3. Install the proprietary NVIDIA drivers (I haven't tested nouveau)
>>>>>>> on the host.
>>>>>>> 4. Add a BusID entry with the PCI bus:device:function combo of the
>>>>>>> second (host) GPU to the Device section of xorg.conf (see the example
>>>>>>> snippet after this list).
>>>>>>> 5. Reboot with the X server. The output should appear on the screen
>>>>>>> that is attached to the second GPU.
>>>>>>> 6. Make sure the first GPU is bound to vfio-pci and doesn't own or
>>>>>>> decode anything according to /dev/vga_arbiter while the VM is not
>>>>>>> running (check commands are sketched after this list).
>>>>>>> 7. Fire up the VM.
>>>>>>> 8. Install the guest device drivers.
>>>>>>>
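>>>>>>> For step 4, a minimal sketch of the xorg.conf Device section. The
>>>>>>> BusID below assumes the host GPU is at 02:00.0 in lspci; adjust it to
>>>>>>> your own address (the numbers in BusID are written in decimal):
>>>>>>>
>>>>>>>   Section "Device"
>>>>>>>       Identifier "HostGPU"
>>>>>>>       Driver     "nvidia"
>>>>>>>       BusID      "PCI:2:0:0"
>>>>>>>   EndSection
>>>>>>>
>>>>>>> For step 6, one quick way to check (assuming the passed-through GPU
>>>>>>> is at 01:00.0; substitute your own address):
>>>>>>>
>>>>>>>   # the card should be claimed by vfio-pci, not nvidia/nouveau
>>>>>>>   lspci -nnk -s 01:00.0 | grep "Kernel driver in use"
>>>>>>>   # the VGA arbiter messages should show the card decoding nothing
>>>>>>>   dmesg | grep -i vgaarb
>>>>>>>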
>>>>>>> Alternative path: screw all this, boot the host in pure UEFI mode
>>>>>>> (no CSM!), make an OVMF-based guest, done (a rough sketch of the
>>>>>>> loader config is below).
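>>>>>>> For the OVMF route, the guest needs a UEFI (pflash) firmware loader
>>>>>>> in its libvirt XML, roughly like this. The firmware paths are only
>>>>>>> examples and differ between distros/OVMF packages, and "win10" is a
>>>>>>> made-up domain name:
>>>>>>>
>>>>>>>   <os>
>>>>>>>     <type arch='x86_64' machine='q35'>hvm</type>
>>>>>>>     <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
>>>>>>>     <nvram>/var/lib/libvirt/qemu/nvram/win10_VARS.fd</nvram>
>>>>>>>   </os>
>>>>>>>
>>>>>>> Note that the guest OS has to be installed in UEFI mode as well.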
>>>>>>> On Sep 23, 2015 9:15 PM, "ALG Bass" <olorin12 at gmail.com> wrote:
>>>>>>>
>>>>>>>> Sorry, I'm still newbish at this.
>>>>>>>>
>>>>>>>> So you're saying it is possible to have only one GPU, and switch
>>>>>>>> the host to 'headless' while the VM is running, with that GPU handed
>>>>>>>> over to the guest? Are there any tutorials detailing how to do this?
>>>>>>>>
>>>>>>>> Thanks.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wed, Sep 23, 2015 at 6:35 AM, Blank Field <
>>>>>>>> ihatethisfield at gmail.com> wrote:
>>>>>>>>
>>>>>>>>> If the host system uses VGA, only one device can be outputting
>>>>>>>>> video.
>>>>>>>>> If the host system and both cards are capable of providing a
>>>>>>>>> GOP (UEFI) display, then the output can be (on my system it is)
>>>>>>>>> mirrored across them all.
>>>>>>>>> You can run the host headless.
>>>>>>>>> You can install both cards, boot the host, and install the host
>>>>>>>>> drivers while specifying the device BusID in xorg.conf.
>>>>>>>>> The primary card handles all host VGA output, including text
>>>>>>>>> terminals like tty2.
>>>>>>>>> When the primary GPU is not used by the host at all, you can hand
>>>>>>>>> it to vfio (a rough sketch of the binding follows below).
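>>>>>>>>> As a sketch of that binding, one common way is to match the card
>>>>>>>>> by vendor:device ID at boot. The IDs below are only examples (a
>>>>>>>>> GTX 970 and its HDMI audio function); take the real ones from
>>>>>>>>> 'lspci -nn':
>>>>>>>>>
>>>>>>>>>   # /etc/modprobe.d/vfio.conf
>>>>>>>>>   options vfio-pci ids=10de:13c2,10de:0fbb
>>>>>>>>>
>>>>>>>>> vfio-pci also has to load early (e.g. from the initramfs) so it
>>>>>>>>> grabs the card before nvidia/nouveau does.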
>>>>>>>>> On Sep 23, 2015 1:08 PM, "ALG Bass" <olorin12 at gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Okay, I just got all my parts and got my computer assembled. Here
>>>>>>>>>> are the relevant parts:
>>>>>>>>>>
>>>>>>>>>> ASRock 970M Pro3 micro-ATX motherboard (has PCIe x16 and PCIe x4
>>>>>>>>>> slots)
>>>>>>>>>> AMD FX-8350
>>>>>>>>>> 16 GB RAM
>>>>>>>>>> 2 graphics cards:
>>>>>>>>>> GeForce GTX 970 in the PCIe x16 slot
>>>>>>>>>> GeForce GTX 750 Ti in the PCIe x4 slot
>>>>>>>>>>
>>>>>>>>>> I get display out of the GTX 970. When I also plug in the GTX 750 Ti,
>>>>>>>>>> it gives nothing - no display from it at all. When I have just the
>>>>>>>>>> 750 Ti plugged in, in either slot, I get display. For some reason, when
>>>>>>>>>> both cards are plugged in, display only comes from the card in the x16
>>>>>>>>>> slot, as if the mobo only wants to run them as SLI.
>>>>>>>>>> I was planning on using the 750 Ti as my Linux graphics card and
>>>>>>>>>> the GTX 970 as my Winders gaming graphics card.
>>>>>>>>>> Anyone else had this problem? And is it possible for me to just
>>>>>>>>>> have the 970 plugged in and pass it through, leaving the Linux host with
>>>>>>>>>> nothing until I shut down the VM, in case I can't get the 750ti to work as
>>>>>>>>>> my Linux graphics card?
>>>>>>>>>>
>>>>>>>>>> Thanks!
>>>>>>>>>>
>>>>>>>>>> _______________________________________________
>>>>>>>>>> vfio-users mailing list
>>>>>>>>>> vfio-users at redhat.com
>>>>>>>>>> https://www.redhat.com/mailman/listinfo/vfio-users
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>>
>>
>