[vfio-users] guest GPU p2p doesn't work

Alex Williamson alex.williamson at redhat.com
Wed Aug 22 15:30:44 UTC 2018


On Wed, 22 Aug 2018 15:04:31 +0000
Zhiyong WU 吴志勇 <zhiyong.wu at bitmain.com> wrote:

> Hi, Alex
> 
> Based on your suggestion, I changed the configuration as shown below, but it still doesn't work. Do you have any other suggestions?
> 
> root      6836     1 38 06:55 ?        00:01:13 /usr/local/qemu-2.9.1/bin/qemu-system-x86_64 -enable-kvm -cpu host,kvm=off -chardev socket,id=hmqmondev,port=55901,host=127.0.0.1,nodelay,server,nowait -mon chardev=hmqmondev,id=hmqmon,mode=readline -rtc base=utc,clock=host,driftfix=none -daemonize -nodefaults -nodefconfig -no-kvm-pit-reinjection -global kvm-pit.lost_tick_policy=discard -machine q35,accel=kvm -k en-us -smp 32 -name BarzHsu-AI -m 131072 -boot order=cdn -device virtio-serial -usb -device usb-kbd -device usb-tablet -vga std -vnc :1 -device virtio-scsi-pci,id=scsi -drive file=/opt/cloud/workspace/disks/3691b8d4-04bd-4338-8134-67620d37bdc8,if=none,id=drive_0,cache=none,aio=native -device scsi-hd,drive=drive_0,bus=scsi.0,id=drive_0 -drive file=/opt/cloud/workspace/disks/24dc552b-8518-4334-92c8-f78c4db8f626,if=none,id=drive_1,cache=none,aio=native -device scsi-hd,drive=drive_1,bus=scsi.0,id=drive_1 -device vfio-pci,host=07:00.0,addr=0x15,x-nv-gpudirect-clique=0 -device vfio-pci,host=08:00.0,addr=0x16,x-nv-gpudirect-clique=0 -device vfio-pci,host=04:00.0,addr=0x17,x-nv-gpudirect-clique=0 -device vfio-pci,host=06:00.0,addr=0x18,x-nv-gpudirect-clique=0 -device vfio-pci,host=0f:00.0,addr=0x19,x-nv-gpudirect-clique=8 -device vfio-pci,host=0e:00.0,addr=0x1a,x-nv-gpudirect-clique=8 -device vfio-pci,host=0d:00.0,addr=0x1b,x-nv-gpudirect-clique=8 -device vfio-pci,host=0c:00.0,addr=0x1c,x-nv-gpudirect-clique=8 -device vfio-pci,host=0c:00.1 -netdev type=tap,id=vnet22-254,ifname=vnet22-254,vhost=on,vhostforce=off,script=/opt/cloud/workspace/servers/6af6cf5b-5c97-426d-92a6-972c0c40c78a/if-up-br0-vnet22-254.sh,downscript=/opt/cloud/workspace/servers/6af6cf5b-5c97-426d-92a6-972c0c40c78a/if-down-br0-vnet22-254.sh -device virtio-net-pci,netdev=vnet22-254,mac=00:22:4c:50:fe:65,addr=0xf,speed=10000 -pidfile /opt/cloud/workspace/servers/6af6cf5b-5c97-426d-92a6-972c0c40c78a/pid -chardev socket,path=/opt/cloud/workspace/servers/6af6cf5b-5c97-426d-92a6-972c0c40c78a/qga.sock,server,nowait,id=qga0 -device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0 -object rng-random,filename=/dev/random,id=rng0 -device virtio-rng-pci,rng=rng0,max-bytes=1024,period=1000
> 
> 
> guest at BarzHsu-AI:~$ nvidia-smi topo -p2p r
>  	GPU0	GPU1	GPU2	GPU3	GPU4	GPU5	GPU6	GPU7	
>  GPU0	X	CNS	CNS	CNS	CNS	CNS	CNS	CNS	
>  GPU1	CNS	X	CNS	CNS	CNS	CNS	CNS	CNS	
>  GPU2	CNS	CNS	X	CNS	CNS	CNS	CNS	CNS	
>  GPU3	CNS	CNS	CNS	X	CNS	CNS	CNS	CNS	
>  GPU4	CNS	CNS	CNS	CNS	X	CNS	CNS	CNS	
>  GPU5	CNS	CNS	CNS	CNS	CNS	X	CNS	CNS	
>  GPU6	CNS	CNS	CNS	CNS	CNS	CNS	X	CNS	
>  GPU7	CNS	CNS	CNS	CNS	CNS	CNS	CNS	X	
> 
> Legend:
> 
>   X    = Self
>   OK   = Status Ok
>   CNS  = Chipset not supported
>   GNS  = GPU not supported
>   TNS  = Topology not supported
>   NS   = Not supported
>   U    = Unknown

As per the previous email, you could add root ports to the q35 VM
configuration and place each of the GPUs downstream of a root port, but
if NVIDIA has decided to blacklist the chipset as a whole, then there's
not much we can do.  You might try older drivers in the guest to see if
this is something they've changed.
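
Just as a rough sketch of what I mean (the root port ids, chassis/slot
numbers, and addresses here are arbitrary placeholders, reusing the
slots your GPUs currently occupy, and would need adjusting to fit the
rest of your layout), the device options might look something like this
for the first couple of GPUs in each clique:

  -device pcie-root-port,id=rp1,bus=pcie.0,addr=0x15,chassis=1,slot=1 \
  -device vfio-pci,host=07:00.0,bus=rp1,x-nv-gpudirect-clique=0 \
  -device pcie-root-port,id=rp2,bus=pcie.0,addr=0x16,chassis=2,slot=2 \
  -device vfio-pci,host=08:00.0,bus=rp2,x-nv-gpudirect-clique=0 \
  ...
  -device pcie-root-port,id=rp5,bus=pcie.0,addr=0x19,chassis=5,slot=5 \
  -device vfio-pci,host=0f:00.0,bus=rp5,x-nv-gpudirect-clique=8 \
  ...

With each GPU sitting below its own PCIe root port rather than directly
on the q35 root complex, the guest topology is closer to what the
driver sees on bare metal.
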
IIRC my testing was with K-series Quadro cards, and the evidence that
p2p was being used came from the results of one of the CUDA toolkit
samples measuring p2p latency and bandwidth.
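
For reference, the sort of test I mean is one of the p2p samples
shipped with the CUDA toolkit; the exact path and sample name depend on
the toolkit version, but roughly along these lines:

  cd /usr/local/cuda/samples/1_Utilities/p2pBandwidthLatencyTest
  make && ./p2pBandwidthLatencyTest

It prints bandwidth and latency matrices with p2p disabled and enabled,
which makes it fairly easy to tell whether p2p is actually being
exercised.
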
I don't know what nvidia-smi would have reported for that
configuration; I didn't know it had this capability.  Thanks,

Alex



