[vfio-users] Brutal DPC Latency - how is yours? check it please and report back

Rokas Kupstys rokups at zoho.com
Mon Feb 29 08:45:42 UTC 2016


Yesterday I figured out my latency problem. Everything listed all over
the internet failed. The last thing I tried was pinning each vCPU to a
pair of physical cores, and that brought latency down. I have an
FX-8350 CPU, which shares one FPU between each pair of cores, so maybe
that's why. With just this pinning, latency is now just above 1000μs
most of the time. Under load, however, latency increases. I threw out
iothreads and emulator pinning and it did not change much. Better
latency could be achieved with isolcpus=2-7, but leaving just two cores
for the host is unacceptable. With that setting latency was around
500μs without load. The good part is that Battlefield 3 no longer lags,
although I observed longer texture loading times compared to bare
metal. The not-so-good part is that there is still minor sound
skipping/crackling, since latency spikes under load. That is very
disappointing. I also tried two VM cores pinned to 4 host cores - BF3
lagged enough to be unplayable. Three VM cores pinned to 6 host cores
was already playable, but sound was still crackling. I noticed little
difference between that and 4 VM cores pinned to 8 host cores. It would
be nice if the sound could be cleaned up. If anyone has any ideas, I'm
all ears. The libvirt XML I use now:

>   <vcpu placement='static'>4</vcpu>
>   <cputune>
>     <vcpupin vcpu='0' cpuset='0-1'/>
>     <vcpupin vcpu='1' cpuset='2-3'/>
>     <vcpupin vcpu='2' cpuset='4-5'/>
>     <vcpupin vcpu='3' cpuset='6-7'/>
>   </cputune>
>   <features>
>     <acpi/>
>     <apic/>
>     <pae/>
>     <hap/>
>     <viridian/>
>     <hyperv>
>       <relaxed state='on'/>
>       <vapic state='on'/>
>       <spinlocks state='on' retries='8191'/>
>     </hyperv>
>     <kvm>
>       <hidden state='on'/>
>     </kvm>
>     <pvspinlock state='on'/>
>   </features>
>   <cpu mode='host-passthrough'>
>     <topology sockets='1' cores='4' threads='1'/>
>   </cpu>
>   <clock offset='utc'>
>     <timer name='rtc' tickpolicy='catchup'/>
>     <timer name='pit' tickpolicy='delay'/>
>     <timer name='hpet' present='no'/>
>     <timer name='hypervclock' present='yes'/>
>   </clock>
>
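For reference, the isolcpus=2-7 test mentioned above was just a kernel
command-line change; a minimal sketch, assuming GRUB as the bootloader
(file name and regeneration command may differ on your distro):

> # /etc/default/grub
> GRUB_CMDLINE_LINUX_DEFAULT="... isolcpus=2-7"
> # then regenerate the config and reboot:
> # grub-mkconfig -o /boot/grub/grub.cfg
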
Kernel configs
> CONFIG_NO_HZ_FULL=y
> CONFIG_RCU_NOCB_CPU_ALL=y
> CONFIG_HZ_1000=y
> CONFIG_HZ=1000
I am not convinced a 1000 Hz tick rate is needed. The default (300)
seems to perform just as well judging by the latency charts. I have not
had a chance to test it with BF3 yet, however.
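
For anyone who wants to compare, the tick settings of a running kernel
can be checked without rebuilding; a quick sketch, assuming your distro
exposes /proc/config.gz or ships /boot/config-* (only one of the two
commands will apply):

> zgrep -E 'CONFIG_HZ=|CONFIG_NO_HZ_FULL=' /proc/config.gz
> grep -E 'CONFIG_HZ=|CONFIG_NO_HZ_FULL=' /boot/config-$(uname -r)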


On 2016.01.12 11:12, thibaut noah wrote:
> +1 on buggy USB. I personally had to purchase a USB controller card,
> because passing my USB headset through brought me so many bugs
> (crashing VLC, lost sound in the VM, glitchy audio).
> Just using the sound from the graphics card over the screen works
> fine.
> Though it seems I cannot run some benchmarks (most likely bad CPU
> recognition, since it's a vCPU), I have no problem running all my
> games on ultra.
>
> I had the same problem (unresponsive host at guest shutdown), which I
> solved by pinning vCPU threads to host cores (leaving the first
> physical core alone) and removing the balloon device, while giving the
> guest only 8GB of RAM.
> On Tue, 12 Jan 2016 at 08:34, Quentin Deldycke
> <quentindeldycke at gmail.com> wrote:
>
>     For my part, most games are more than playable...
>     Battlefront is flawless, Fallout 4 too. I almost never run into
>     laggy games.
>
>     For my part, I use my GPU as a sound card and I have no glitches
>     at all.
>
>     The main problem for me is Heroes of the Storm (playing with
>     friends, my brother...), which is completely catastrophic. My FPS
>     yo-yos between 200 and 15 -_-'
>
>
>     But yes, motherboards are buggy as f***. For my part, about once a
>     month the BIOS dropdown for "GPU default output" (to select the
>     iGPU or the PCIe card) becomes totally useless: the PCIe card is
>     used in any case, and I have to clear the motherboard's CMOS.
>
>     Also, USB is badly buggy sometimes... When I boot the PC, it
>     doesn't work and detects a buggy device on one of my root ports,
>     which delays boot by ~1 minute. But after that, all ports work...
>
>     -- 
>     Deldycke Quentin
>
>
>     On 11 January 2016 at 23:44, Frank Wilson
>     <frank at zenlambda.com> wrote:
>
>         Not sure if this is adding much to the conversation, but I
>         gave up on my GPU passthrough project because of horrible DPC
>         latency. It didn't affect the graphics so much, but sound was
>         terrible.
>
>         My gut feeling was that the hardware wasn't good enough. I was
>         using an AMD 8-core Piledriver CPU with an Asus Crosshair V
>         motherboard. The motherboard has broken IVRS tables, so it's
>         not supported by Xen.
>
>         I've been looking with great interest at Intel Skylake. I'm
>         also interested in technologies like XenGT. However, I just
>         don't trust motherboard manufacturers to deliver firmware
>         without critical bugs that knock out the virtualisation
>         features I need.
>
>         A lot of the GPU passthrough demos are very impressive but
>         often don't demonstrate dropout-free audio.
>
>
>         Frank
>
>         On 11 January 2016 at 08:59, rndbit <rndbit at sysret.net>
>         wrote:
>         > Tried Milos' config too - DPC latency got worse. I use an
>         > AMD CPU though, so it's hardly comparable.
>         > One thing to note is that both the VM and bare metal (same
>         > OS) score around 5k points in the 3DMark Fire Strike test
>         > (the VM about 300 points less). That sounds not too bad, but
>         > in reality BF4 is pretty much unplayable in the VM due to
>         > bad performance and sound glitches, while playing it on bare
>         > metal is just fine. Again, DPC latency on bare metal even
>         > under load is OK - an occasional spike here and there, but
>         > mostly it is within the norm. Any kind of load on the VM
>         > makes DPC go nuts and performance is terrible. I even tried
>         > isolcpus=4,5,6,7 and binding the VM to those free cores -
>         > it's all the same.
>         >
>         > An interesting observation is that I used to play Titanfall
>         > without a hitch in the VM some time in the past, on a 3.10
>         > kernel or so (no patches). When I get a free moment I'll try
>         > downgrading the kernel; maybe the problem is there.
>         >
>         >
>         > On 2016.01.11 10:39, Quentin Deldycke wrote:
>         >
>         > Also, I just saw something:
>         >
>         > You use ultra (4K?) settings on a GTX 770. This is too heavy
>         > for it; you have less than 10 fps. So in fact, if you lose,
>         > let's say, 10% of performance, you will barely see it.
>         >
>         > What we are after is a very fast response time. Could you
>         > please compare your system with a lighter benchmark? It is
>         > easier to see the difference at ~50-70 fps.
>         >
>         > In my case, this configuration works, but my FPS fluctuates
>         > quite a lot. If you are a somewhat serious gamer, these
>         > drops are not an option during a game :)
>         >
>         > --
>         > Deldycke Quentin
>         >
>         >
>         > On 11 January 2016 at 08:54, Quentin Deldycke
>         > <quentindeldycke at gmail.com> wrote:
>         >>
>         >> Using this mode, DPC latency is hugely buggy.
>         >>
>         >> My FPS also swings in an apocalyptic way: from 80 to 45 fps
>         >> without moving, in Unigine Valley.
>         >>
>         >> Do you have anything running on your Linux side? (I have
>         >> Plasma doing nothing on another screen.)
>         >>
>         >> Unigine Heaven went back down to 2600 points from 3100.
>         >> Cinebench R15: single core 124.
>         >>
>         >>
>         >> Could you please send your whole XML file, QEMU version and
>         >> kernel config / boot parameters?
>         >>
>         >> I will try to get 3DMark and do a host / virtual comparison.
>         >>
>         >> --
>         >> Deldycke Quentin
>         >>
>         >>
>         >> On 9 January 2016 at 20:24, Milos Kaurin
>         >> <milos.kaurin at gmail.com> wrote:
>         >>>
>         >>> My details:
>         >>> Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz
>         >>> 32GB total ram
>         >>> hugepages at 16x1GB for the guest (didn't have much to do
>         >>> with the 3DMark results)
>         >>>
>         >>> I have had the best performance with:
>         >>>
>         >>>   <vcpu placement='static'>8</vcpu>
>         >>>   <cpu mode='custom' match='exact'>
>         >>>     <model fallback='allow'>host-passthrough</model>
>         >>>     <topology sockets='1' cores='4' threads='2'/>
>         >>>   </cpu>
>         >>>
>         >>> No CPU pinning on either guest or host
>         >>>
>         >>> Benchmark example (Bare metal Win10 vs Fedora Guest Win10)
>         >>> http://www.3dmark.com/compare/fs/7076732/fs/7076627#
>         >>>
>         >>>
>         >>> Could you try my settings and report back?
>         >>>
>         >>> On Sat, Jan 9, 2016 at 3:14 PM, Quentin Deldycke
>         >>> <quentindeldycke at gmail.com> wrote:
>         >>> > I use virsh:
>         >>> >
>         >>> > ===SNIP===
>         >>> >   <vcpu placement='static'>3</vcpu>
>         >>> >   <cputune>
>         >>> >     <vcpupin vcpu='0' cpuset='1'/>
>         >>> >     <vcpupin vcpu='1' cpuset='2'/>
>         >>> >     <vcpupin vcpu='2' cpuset='3'/>
>         >>> >     <emulatorpin cpuset='6-7'/>
>         >>> >   </cputune>
>         >>> > ===SNAP===
>         >>> >
>         >>> > I have a prepare script running:
>         >>> >
>         >>> > ===SNIP===
>         >>> > sudo mkdir /cpuset
>         >>> > sudo mount -t cpuset none /cpuset/
>         >>> > cd /cpuset
>         >>> > echo 0 | sudo tee -a cpuset.cpu_exclusive
>         >>> > echo 0 | sudo tee -a cpuset.mem_exclusive
>         >>> >
>         >>> > sudo mkdir sys
>         >>> > echo 'Building shield for the core system... threads 0 and 4, and we place all running tasks there'
>         >>> > /bin/echo 0,4 | sudo tee -a sys/cpuset.cpus
>         >>> > /bin/echo 0 | sudo tee -a sys/cpuset.mems
>         >>> > /bin/echo 0 | sudo tee -a sys/cpuset.cpu_exclusive
>         >>> > /bin/echo 0 | sudo tee -a sys/cpuset.mem_exclusive
>         >>> > for T in `cat tasks`; do
>         >>> >     sudo bash -c "/bin/echo $T > sys/tasks" >/dev/null 2>&1
>         >>> > done
>         >>> > cd -
>         >>> > ===SNAP===
>         >>> >
>         >>> > Note that I use this kernel command line:
>         >>> > nohz_full=1,2,3,4,5,6,7 rcu_nocbs=1,2,3,4,5,6,7
>         >>> > default_hugepagesz=1G hugepagesz=1G hugepages=12
>         >>> >
>         >>> >
>         >>> > --
>         >>> > Deldycke Quentin
>         >>> >
>         >>> >
>         >>> > On 9 January 2016 at 15:40, rndbit
>         >>> > <rndbit at sysret.net> wrote:
>         >>> >>
>         >>> >> Mind posting the actual commands for how you achieved this?
>         >>> >>
>         >>> >> All I'm doing now is this:
>         >>> >>
>         >>> >> cset set -c 0-3 system
>         >>> >> cset proc -m -f root -t system -k
>         >>> >>
>         >>> >>   <vcpu placement='static'>4</vcpu>
>         >>> >>   <cputune>
>         >>> >>     <vcpupin vcpu='0' cpuset='4'/>
>         >>> >>     <vcpupin vcpu='1' cpuset='5'/>
>         >>> >>     <vcpupin vcpu='2' cpuset='6'/>
>         >>> >>     <vcpupin vcpu='3' cpuset='7'/>
>         >>> >>     <emulatorpin cpuset='0-3'/>
>         >>> >>   </cputune>
>         >>> >>
>         >>> >> Basically this puts most threads on cores 0-3, including the
>         >>> >> emulator threads. Some threads can't be moved though, so they
>         >>> >> remain on cores 4-7. The VM is given cores 4-7. It works
>         >>> >> better, but there is still much to be desired.
>         >>> >>
>         >>> >>
>         >>> >>
>         >>> >> On 2016.01.09 15:59, Quentin Deldycke wrote:
>         >>> >>
>         >>> >> Hello,
>         >>> >>
>         >>> >> Using cpuset, I was running the VM with:
>         >>> >>
>         >>> >> Core 0: threads 0 & 4: Linux + emulator pin
>         >>> >> Cores 1,2,3: threads 1,2,3,5,6,7: Windows
>         >>> >>
>         >>> >> I tested with:
>         >>> >> Core 0: threads 0 & 4: Linux
>         >>> >> Cores 1,2,3: threads 1,2,3: Windows
>         >>> >> Cores 1,2,3: threads 5,6,7: emulator
>         >>> >>
>         >>> >> The difference between the two is huge (DPC latency is much
>         >>> >> more stable):
>         >>> >> Single-core performance went up by 50% (Cinebench per-core
>         >>> >> score from 100 to 150 points)
>         >>> >> GPU performance went up by about 20% (Cinebench from 80 fps
>         >>> >> to 100+)
>         >>> >> Performance in "Heroes of the Storm" went from 20~30 fps to a
>         >>> >> stable 60 (and often more than 100)
>         >>> >>
>         >>> >> (Unigine Heaven went from 2700 points to 3100 points)
>         >>> >>
>         >>> >> The only sad thing is that the 3 idle threads are barely
>         >>> >> used... Is there any way to give them back to Windows?
>         >>> >>
>         >>> >> --
>         >>> >> Deldycke Quentin
>         >>> >>
>         >>> >>
>         >>> >> On 29 December 2015 at 17:38, Michael Bauer
>         >>> >> <michael at m-bauer.org> wrote:
>         >>> >>>
>         >>> >>> I noticed that attaching a DVD drive from the host leads to
>         >>> >>> HUGE delays. I had attached my /dev/sr0 to the guest, and
>         >>> >>> even without a DVD in the drive this was causing huge lag
>         >>> >>> about once per second.
>         >>> >>>
>         >>> >>> Best regards
>         >>> >>> Michael
>         >>> >>>
>         >>> >>>
>         >>> >>> On 28.12.2015 at 19:30, rndbit wrote:
>         >>> >>>
>         >>> >>> 4000μs-16000μs here, it's terrible.
>         >>> >>> Tried what's said on
>         >>> >>> https://lime-technology.com/forum/index.php?topic=43126.15
>         >>> >>> It's a bit better with this:
>         >>> >>>
>         >>> >>>   <vcpu placement='static'>4</vcpu>
>         >>> >>>   <cputune>
>         >>> >>>     <vcpupin vcpu='0' cpuset='4'/>
>         >>> >>>     <vcpupin vcpu='1' cpuset='5'/>
>         >>> >>>     <vcpupin vcpu='2' cpuset='6'/>
>         >>> >>>     <vcpupin vcpu='3' cpuset='7'/>
>         >>> >>>     <emulatorpin cpuset='0-3'/>
>         >>> >>>   </cputune>
>         >>> >>>
>         >>> >>> I tried isolcpus but it did not yield visible benefits.
>         >>> >>> ndis.sys is the big offender here, but I don't really
>         >>> >>> understand why. Removing the network interface from the VM
>         >>> >>> makes usbport.sys take over as the biggest offender. All
>         >>> >>> this happens with the performance governor on all CPU
>         >>> >>> cores:
>         >>> >>>
>         >>> >>> echo performance | tee \
>         >>> >>>   /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor >/dev/null
>         >>> >>>
>         >>> >>> Cores remain clocked at 4 GHz. I don't know what else I
>         >>> >>> could try. Does anyone have any ideas..?
>         >>> >>>
>         >>> >>> On 2015.10.29 08:03, Eddie Yen wrote:
>         >>> >>>
>         >>> >>> I tested again after a VM reboot; I found that this time it
>         >>> >>> is about 1000~1500μs.
>         >>> >>> I also found that it easily gets high while the hard drive
>         >>> >>> is loading, but only a few times.
>         >>> >>>
>         >>> >>> Which specs are you using? Maybe it depends on the CPU or
>         >>> >>> patches.
>         >>> >>>
>         >>> >>> 2015-10-29 13:44 GMT+08:00 Blank Field
>         >>> >>> <ihatethisfield at gmail.com>:
>         >>> >>>>
>         >>> >>>> If I understand it right, this software has a fixed
>         >>> >>>> latency error of 1 ms (1000μs) in Windows 8-10 due to a
>         >>> >>>> different kernel timer implementation. So I guess your
>         >>> >>>> latency is very good.
>         >>> >>>>
>         >>> >>>> On Oct 29, 2015 8:40 AM, "Eddie Yen"
>         >>> >>>> <missile0407 at gmail.com> wrote:
>         >>> >>>>>
>         >>> >>>>> Thanks for the information! And sorry, I didn't read the
>         >>> >>>>> beginning of the message carefully.
>         >>> >>>>>
>         >>> >>>>> For my result, I got about 1000μs or below, and only a
>         >>> >>>>> few times got above 1000μs when idling.
>         >>> >>>>>
>         >>> >>>>> I'm using a 4820K and gave 4 threads to the VM; I also
>         >>> >>>>> set these 4 threads as 4 cores in the VM settings.
>         >>> >>>>> The OS is Windows 10.
>         >>> >>>>>
>         >>> >>>>> 2015-10-29 13:21 GMT+08:00 Blank Field
>         >>> >>>>> <ihatethisfield at gmail.com>:
>         >>> >>>>>>
>         >>> >>>>>> I think they're using this:
>         >>> >>>>>> http://www.thesycon.de/deu/latency_check.shtml
>         >>> >>>>>>
>         >>> >>>>>> On Oct 29, 2015 6:11 AM, "Eddie Yen"
>         >>> >>>>>> <missile0407 at gmail.com> wrote:
>         >>> >>>>>>>
>         >>> >>>>>>> Sorry, but how do I check DPC latency?
>         >>> >>>>>>>
>         >>> >>>>>>> 2015-10-29 10:08 GMT+08:00 Nick Sukharev
>         >>> >>>>>>> <nicksukharev at gmail.com>:
>         >>> >>>>>>>>
>         >>> >>>>>>>> I just checked on W7 and I get 3000μs-4000μs on one of
>         >>> >>>>>>>> the guests when 3 guests are running.
>         >>> >>>>>>>>
>         >>> >>>>>>>> On Wed, Oct 28, 2015 at 4:52 AM, Sergey Vlasov
>         >>> >>>>>>>> <sergey at vlasov.me> wrote:
>         >>> >>>>>>>>>
>         >>> >>>>>>>>> On 27 October 2015 at 18:38, LordZiru
>         >>> >>>>>>>>> <lordziru at gmail.com> wrote:
>         >>> >>>>>>>>>>
>         >>> >>>>>>>>>> I have brutal DPC latency on QEMU, no matter whether
>         >>> >>>>>>>>>> I use pci-assign or vfio-pci, or no passthrough at
>         >>> >>>>>>>>>> all.
>         >>> >>>>>>>>>>
>         >>> >>>>>>>>>> My DPC latency is like:
>         >>> >>>>>>>>>> 10000,500,8000,6000,800,300,12000,9000,700,2000,9000
>         >>> >>>>>>>>>> and on native Windows 7 it is like:
>         >>> >>>>>>>>>> 20,30,20,50,20,30,20,20,30
>         >>> >>>>>>>>>
>         >>> >>>>>>>>>
>         >>> >>>>>>>>> In a Windows 10 guest I constantly have red bars
>         >>> >>>>>>>>> around 3000μs (microseconds), sometimes spiking up to
>         >>> >>>>>>>>> 10000μs.
>         >>> >>>>>>>>>
>         >>> >>>>>>>>>>
>         >>> >>>>>>>>>> I don't know how to fix it.
>         >>> >>>>>>>>>> This matters to me because I am using a USB sound
>         >>> >>>>>>>>>> card for my VMs, and I get sound drop-outs every 0-4
>         >>> >>>>>>>>>> seconds.
>         >>> >>>>>>>>>>
>         >>> >>>>>>>>>
>         >>> >>>>>>>>> That bugs me a lot too. I also use an external USB
>         >>> >>>>>>>>> card and my DAW periodically drops out :(
>         >>> >>>>>>>>>
>         >>> >>>>>>>>> I haven't tried CPU pinning yet, though. And perhaps
>         >>> >>>>>>>>> I should try Windows 7.
>         >>> >>>>>>>>>
>         >>> >>>>>>>>>
