[vfio-users] vfio passthrough devices behind PCIe switch problem

rhett rhett rhett.kernel at gmail.com
Thu Mar 9 03:47:32 UTC 2017


Can somebody help me?

2017-03-08 14:34 GMT+08:00 rhett rhett <rhett.kernel at gmail.com>:

> Here's some more of the error log from the CentOS guest:
>
> Mar  7 05:38:07 localhost kernel: NVRM: loading NVIDIA UNIX x86_64 Kernel
> Module  375.39  Tue Jan 31 20:47:00 PST 2017 (using threaded interrupts)
> Mar  7 05:38:08 localhost kernel: nvidia-modeset: Loading NVIDIA Kernel
> Mode Setting Driver for UNIX platforms  375.39  Tue Jan 31 19:41:48 PST 2017
> Mar  7 05:39:27 localhost kernel: NVRM: RmInitAdapter failed!
> (0x24:0x51:1060)
> Mar  7 05:39:27 localhost kernel: NVRM: rm_init_adapter failed for device
> bearing minor number 0
> Mar  7 05:43:40 localhost kernel: NVRM: RmInitAdapter failed!
> (0x24:0x51:1060)
> Mar  7 05:43:40 localhost kernel: NVRM: rm_init_adapter failed for device
> bearing minor number 0
> Mar  8 05:07:47 localhost kernel: nvidia: module license 'NVIDIA' taints
> kernel.
> Mar  8 05:07:47 localhost kernel: NVRM: loading NVIDIA UNIX x86_64 Kernel
> Module
>
> 2017-03-08 14:31 GMT+08:00 rhett rhett <rhett.kernel at gmail.com>:
>
>> I have two guests, a Windows 2008 Server and a CentOS 7.2. In Windows,
>> the device manager says the GPU can't start, error code 10.
>> In CentOS, when I run nvidia-smi, it says no devices were found.
>>
>> There are no special VM configurations. With the same config, I can use
>> the GPU successfully on my two-GPU server; the biggest difference is that
>> that server has no PCIe switch.
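>>
>> To collect the rest of the info Alex asked for below, I plan to grab it
>> roughly like this (just a sketch; "win2008" is a placeholder for my VM
>> name, and 87:00.0 is one of the GPU addresses from dmesg):
>>
>>     # full PCI details of one assigned GPU, from the host
>>     sudo lspci -vvv -s 0000:87:00.0 > gpu-87.lspci.txt
>>     # the VM definition (libvirt) and the running QEMU command line
>>     virsh dumpxml win2008 > win2008.xml
>>     pgrep -af qemu > qemu-cmdline.txt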
>>
>> 2017-03-08 11:55 GMT+08:00 Alex Williamson <alex.williamson at redhat.com>:
>>
>>> On Wed, 8 Mar 2017 11:26:17 +0800
>>> rhett rhett <rhett.kernel at gmail.com> wrote:
>>>
>>> > I found the reason the two GPUs share the same IRQ: MSI gets disabled
>>> > later, so IRQ 140 is being reused.
>>> >
>>> > But I don't know why somebody calls vfio_pci_ioctl to disable MSI.
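>>> >
>>> > For reference, the MSI state can be watched from the host side with
>>> > something like this (only a rough sketch; the addresses are the two
>>> > GPUs from the dmesg lines quoted further down, and I just look for
>>> > "MSI: Enable+" vs "Enable-"):
>>> >
>>> >     # MSI capability state of the two assigned GPUs, as seen by the host
>>> >     sudo lspci -vvv -s 0000:87:00.0 | grep -i 'MSI:'
>>> >     sudo lspci -vvv -s 0000:04:00.0 | grep -i 'MSI:'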
>>>
>>> vfio just does what the guest requests, but you're really providing
>>> hardly any more information than when you asked off list.  My wild
>>> guess is that maybe you're running a Windows guest and not configuring
>>> the VM for a vCPU type where Windows supports MSI.  For more
>>> assistance, please provide basic information, like the QEMU command
>>> line or VM XML, also the PCI information from the host (sudo lspci
>>> -vvv), and of course any error codes in the guest or an actual
>>> description of how the device doesn't work in the guest.  Thanks,
>>>
>>> Alex
>>>
>>>
>>> > 2017-03-08 10:55 GMT+08:00 rhett rhett <rhett.kernel at gmail.com>:
>>> >
>>> > > I have a question about VFIO; here is my description.
>>> > >
>>> > > I have 8 GPUs in my server machine, but they are all behind a PCIe
>>> > > bridge. When I do a VFIO passthrough, I can't use the GPUs in my
>>> > > guest OS.
>>> > > dmesg shows the following messages:
>>> > >
>>> > > [  662.208072] vfio-pci 0000:87:00.0: irq 140 for MSI/MSI-X
>>> > > [  725.761623] vfio-pci 0000:04:00.0: irq 140 for MSI/MSI-X
>>> > >
>>> > > I started two VMs, one using 87 and the other using 04, and dmesg shows
>>> > > that they share the same IRQ 140. Is this normal?
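>>> > >
>>> > > (To see who actually owns that IRQ on the host, I check roughly like
>>> > > this -- only a sketch; the vfio MSI entries in /proc/interrupts are
>>> > > named after the device address:)
>>> > >
>>> > >     # which devices are attached to IRQ 140, and all vfio-owned IRQs
>>> > >     grep -E '^ *140:' /proc/interrupts
>>> > >     grep vfio /proc/interrupts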
>>> > >
>>> > > I also looked at the IOMMU groups: each GPU is in a separate group,
>>> > > with no other device in the group. So does this mean ACS is working
>>> > > correctly?
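>>> > >
>>> > > (The way I list the groups is roughly the following -- sketch only;
>>> > > each symlink under a group's devices/ directory is one device in that
>>> > > group:)
>>> > >
>>> > >     # one line per device, showing which IOMMU group it belongs to
>>> > >     find /sys/kernel/iommu_groups/ -type l | sort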
>>> > >
>>> > > Hope to get your help!
>>> > >
>>>
>>>
>>
>