[vfio-users] guest GPU p2p doesn't work

Alex Williamson alex.williamson at redhat.com
Wed Aug 22 14:32:43 UTC 2018


On Wed, 22 Aug 2018 13:53:19 +0000
Zhiyong WU 吴志勇 <zhiyong.wu at bitmain.com> wrote:

> Hi,
> 
> Today I tried to enable guest GPU p2p in the following way, but it failed. Does anyone know the reason?
> 
> 
>   1.  Hypervisor info
> [root@localhost ~]# ./qemu-system-x86_64 --version
> QEMU emulator version 2.12.1 (v2.12.1-dirty)
> Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
> 
> [root@localhost ~]# nvidia-smi topo -p2p r
>         GPU0  GPU1  GPU2  GPU3  GPU4  GPU5  GPU6  GPU7
>  GPU0   X     OK    OK    OK    OK    OK    OK    OK
>  GPU1   OK    X     OK    OK    OK    OK    OK    OK
>  GPU2   OK    OK    X     OK    OK    OK    OK    OK
>  GPU3   OK    OK    OK    X     OK    OK    OK    OK
>  GPU4   OK    OK    OK    OK    X     OK    OK    OK
>  GPU5   OK    OK    OK    OK    OK    X     OK    OK
>  GPU6   OK    OK    OK    OK    OK    OK    X     OK
>  GPU7   OK    OK    OK    OK    OK    OK    OK    X
> 
> Legend:
> 
>   X    = Self
>   OK   = Status Ok
>   CNS  = Chipset not supported
>   GNS  = GPU not supported
>   TNS  = Topology not supported
>   NS   = Not supported
>   U    = Unknown
> [root@localhost ~]#
> 
> [root@localhost ~]# ps -ef | grep qemu
> root      2608     1  7 03:32 ?
> 00:09:42 /usr/local/qemu-2.12.1/bin/qemu-system-x86_64 -enable-kvm
> -cpu host,kvm=off -chardev
> socket,id=hmqmondev,port=55901,host=127.0.0.1,nodelay,server,nowait
> -mon chardev=hmqmondev,id=hmqmon,mode=readline -rtc
> base=utc,clock=host,driftfix=none -daemonize -nodefaults -nodefconfig
> -no-kvm-pit-reinjection -global kvm-pit.lost_tick_policy=discard
> -machine pc,accel=kvm -k en-us -smp 32 -name BarzHsu-AI -m 131072
> -boot order=cdn -device virtio-serial -usb -device usb-kbd -device
> usb-tablet -vga std -vnc :1 -device virtio-scsi-pci,id=scsi -drive
> file=/opt/cloud/workspace/disks/3691b8d4-04bd-4338-8134-67620d37bdc8,if=none,id=drive_0,cache=none,aio=native
> -device scsi-hd,drive=drive_0,bus=scsi.0,id=drive_0 -drive
> file=/opt/cloud/workspace/disks/24dc552b-8518-4334-92c8-f78c4db8f626,if=none,id=drive_1,cache=none,aio=native
> -device scsi-hd,drive=drive_1,bus=scsi.0,id=drive_1 -device
> vfio-pci,host=07:00.0,multifunction=on,addr=0x15,x-nv-gpudirect-clique=1
> -device vfio-pci,host=07:00.1 -device


Note that unless you provide an addr= for the audio function to place
it into the same slot as the GPU function, specifying multifunction=on
for the GPU function is pointless.
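As a sketch of what I mean (reusing the 07:00.0/07:00.1 pair from the command line above; guest slot 0x15 is just the address already chosen there), the audio function would need an explicit function number in the same slot:

```shell
# Hypothetical fragment, not a full command line: put the GPU at
# function 0 and its audio companion at function 1 of guest slot 0x15,
# so the guest actually sees one multifunction device.
-device vfio-pci,host=07:00.0,multifunction=on,addr=0x15.0,x-nv-gpudirect-clique=1 \
-device vfio-pci,host=07:00.1,addr=0x15.1
```

Without the `addr=0x15.1` on the second function, QEMU places the audio device in its own slot and `multifunction=on` on the GPU accomplishes nothing.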


> vfio-pci,host=08:00.0,multifunction=on,addr=0x16,x-nv-gpudirect-clique=1
> -device vfio-pci,host=08:00.1 -device
> vfio-pci,host=04:00.0,multifunction=on,addr=0x17,x-nv-gpudirect-clique=1
> -device vfio-pci,host=04:00.1 -device
> vfio-pci,host=06:00.0,multifunction=on,addr=0x18,x-nv-gpudirect-clique=1
> -device vfio-pci,host=06:00.1 -device
> vfio-pci,host=0f:00.0,multifunction=on,addr=0x19,x-nv-gpudirect-clique=1
> -device vfio-pci,host=0f:00.1 -device
> vfio-pci,host=0e:00.0,multifunction=on,addr=0x1a,x-nv-gpudirect-clique=1
> -device vfio-pci,host=0e:00.1 -device
> vfio-pci,host=0d:00.0,multifunction=on,addr=0x1b,x-nv-gpudirect-clique=1
> -device vfio-pci,host=0d:00.1 -device
> vfio-pci,host=0c:00.0,multifunction=on,addr=0x1c,x-nv-gpudirect-clique=1
> -device vfio-pci,host=0c:00.1 -device
> ide-cd,drive=ide0-cd0,bus=ide.1,unit=1 -drive
> id=ide0-cd0,media=cdrom,if=none -netdev
> type=tap,id=vnet22-254,ifname=vnet22-254,vhost=on,vhostforce=off,script=/opt/cloud/workspace/servers/6af6cf5b-5c97-426d-92a6-972c0c40c78a/if-up-br0-vnet22-254.sh,downscript=/opt/cloud/workspace/servers/6af6cf5b-5c97-426d-92a6-972c0c40c78a/if-down-br0-vnet22-254.sh
> -device
> virtio-net-pci,netdev=vnet22-254,mac=00:22:4c:50:fe:65,addr=0xf,speed=10000
> -pidfile /opt/cloud/workspace/servers/6af6cf5b-5c97-426d-92a6-972c0c40c78a/pid
> -chardev
> socket,path=/opt/cloud/workspace/servers/6af6cf5b-5c97-426d-92a6-972c0c40c78a/qga.sock,server,nowait,id=qga0
> -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0
> -object rng-random,filename=/dev/random,id=rng0 -device
> virtio-rng-pci,rng=rng0,max-bytes=1024,period=1000
>
>
>   2.  Guest info
> 
> guest@BarzHsu-AI:~$ nvidia-smi topo -m
>        GPU0  GPU1  GPU2  GPU3  GPU4  GPU5  GPU6  GPU7  CPU Affinity
> GPU0   X     PHB   PHB   PHB   PHB   PHB   PHB   PHB   0-31
> GPU1   PHB   X     PHB   PHB   PHB   PHB   PHB   PHB   0-31
> GPU2   PHB   PHB   X     PHB   PHB   PHB   PHB   PHB   0-31
> GPU3   PHB   PHB   PHB   X     PHB   PHB   PHB   PHB   0-31
> GPU4   PHB   PHB   PHB   PHB   X     PHB   PHB   PHB   0-31
> GPU5   PHB   PHB   PHB   PHB   X     PHB   PHB   PHB   0-31
> GPU6   PHB   PHB   PHB   PHB   PHB   PHB   X     PHB   0-31
> GPU7   PHB   PHB   PHB   PHB   PHB   PHB   PHB   X     0-31
> 
> Legend:
> 
>   X    = Self
>   SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
>   NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
>   PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
>   PXB  = Connection traversing multiple PCIe switches (without traversing the PCIe Host Bridge)
>   PIX  = Connection traversing a single PCIe switch
>   NV#  = Connection traversing a bonded set of # NVLinks
> guest@BarzHsu-AI:~$ nvidia-smi topo -p2p r
> 
>        GPU0  GPU1  GPU2  GPU3  GPU4  GPU5  GPU6  GPU7
> GPU0   X     CNS   CNS   CNS   CNS   CNS   CNS   CNS
> GPU1   CNS   X     CNS   CNS   CNS   CNS   CNS   CNS
> GPU2   CNS   CNS   X     CNS   CNS   CNS   CNS   CNS
> GPU3   CNS   CNS   CNS   X     CNS   CNS   CNS   CNS
> GPU4   CNS   CNS   CNS   CNS   X     CNS   CNS   CNS
> GPU5   CNS   CNS   CNS   CNS   CNS   X     CNS   CNS
> GPU6   CNS   CNS   CNS   CNS   CNS   CNS   X     CNS
> GPU7   CNS   CNS   CNS   CNS   CNS   CNS   CNS   X
> 
> Legend:
> 
>   X    = Self
>   OK   = Status Ok
>   CNS  = Chipset not supported
>   GNS  = GPU not supported
>   TNS  = Topology not supported
>   NS   = Not supported
>   U    = Unknown


Does "Chipset not supported" suggest NVIDIA has disabled GPU Direct for
i440fx regardless of providing the clique information?  Maybe try
QEMU's q35 chipset, perhaps even in a more typical PCIe configuration
with GPUs downstream of root ports.  Perhaps NVIDIA no longer honors
their own specification.  Dunno.  Thanks,
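Something along these lines (a hypothetical fragment only, reusing the first GPU pair from your command line; the root-port ids/chassis numbers are made up):

```shell
# Sketch: q35 machine type with the GPU behind an emulated PCIe root
# port instead of on the i440fx root bus, which looks more like bare
# metal to the guest driver.
qemu-system-x86_64 -enable-kvm -machine q35,accel=kvm -cpu host,kvm=off \
  -device pcie-root-port,id=rp1,chassis=1 \
  -device vfio-pci,host=07:00.0,bus=rp1,multifunction=on,addr=0x0.0,x-nv-gpudirect-clique=1 \
  -device vfio-pci,host=07:00.1,bus=rp1,addr=0x0.1
```

Repeat one root port per GPU/audio pair; whether the NVIDIA driver then accepts the clique hint is exactly the open question.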

Alex



