<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
Yesterday I figured out my latency problem. Everything suggested
around the internet had failed; the last thing I tried was pinning
each vCPU to a pair of physical cores, and that brought latency
down. My FX-8350 shares one FPU between each pair of cores, so
perhaps that is why. With just this pinning, latency now sits just
above 1000μs most of the time, though it still climbs under load.
Dropping iothreads and emulator pinning made little difference
either way. Better latency could be had with isolcpus=2-7 (around
500μs idle), but leaving only two cores to the host is
unacceptable.<br>
<br>
The good part is that Battlefield 3 no longer lags, although
texture loading takes longer than on bare metal. The not-so-good
part is that there is still minor sound skipping/crackling, since
latency spikes under load, which is very disappointing. I also
tried two VM cores pinned to 4 host cores: BF3 lagged enough to be
unplayable. Three VM cores pinned to 6 host cores was already
playable, but sound still crackled, and I noticed little difference
between that and 4 VM cores pinned to 8 host cores. It would be
nice if the sound could be cleaned up; if anyone has ideas, I'm all
ears. The libvirt XML I use now:<br>
<br>
<blockquote type="cite"> <vcpu
placement='static'>4</vcpu><br>
<cputune><br>
<vcpupin vcpu='0' cpuset='0-1'/><br>
<vcpupin vcpu='1' cpuset='2-3'/><br>
<vcpupin vcpu='2' cpuset='4-5'/><br>
<vcpupin vcpu='3' cpuset='6-7'/><br>
</cputune><br>
<features><br>
<acpi/><br>
<apic/><br>
<pae/><br>
<hap/><br>
<viridian/><br>
<hyperv><br>
<relaxed state='on'/><br>
<vapic state='on'/><br>
<spinlocks state='on' retries='8191'/><br>
</hyperv><br>
<kvm><br>
<hidden state='on'/><br>
</kvm><br>
<pvspinlock state='on'/><br>
</features><br>
<cpu mode='host-passthrough'><br>
<topology sockets='1' cores='4' threads='1'/><br>
</cpu><br>
<clock offset='utc'><br>
<timer name='rtc' tickpolicy='catchup'/><br>
<timer name='pit' tickpolicy='delay'/><br>
<timer name='hpet' present='no'/><br>
<timer name='hypervclock' present='yes'/><br>
</clock><br>
<br>
</blockquote>
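For reference, a quick way to verify the pinning actually took effect (a sketch; the domain name "win10" is a placeholder, substitute your own):<br>
<br>

```shell
# Hypothetical domain name "win10" - replace with your guest's name.
# Show the vCPU-to-host-CPU map libvirt applied:
virsh vcpupin win10

# Cross-check the affinity of the QEMU vCPU threads directly:
for tid in $(ps -Lo lwp= -C qemu-system-x86_64); do
    taskset -cp "$tid"
done
```

Each vCPU thread should report an affinity mask matching the cpuset ranges from the XML above.<br>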
Kernel config options:<br>
<blockquote type="cite">
CONFIG_NO_HZ_FULL=y<br>
CONFIG_RCU_NOCB_CPU_ALL=y<br>
CONFIG_HZ_1000=y<br>
CONFIG_HZ=1000</blockquote>
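Note that CONFIG_NO_HZ_FULL only enables full tickless mode; which CPUs it covers still comes from the nohz_full= boot parameter. A quick sanity check on a running kernel (standard sysfs/procfs paths):<br>
<br>

```shell
# CPUs actually running in full-tickless mode (empty output means none):
cat /sys/devices/system/cpu/nohz_full

# Confirm the matching boot parameters were actually passed:
tr ' ' '\n' < /proc/cmdline | grep -E '^(nohz_full|rcu_nocbs|isolcpus)='
```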
I am not convinced a 1000 Hz tick rate is needed. The default (300)
seems to perform just as well, judging by the latency charts; I have
not had a chance to test it with BF3 yet, however.<br>
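For anyone comparing, the tick rate a running kernel was built with can be read from its config (the /boot path is distro-dependent; /proc/config.gz works where the kernel exposes it):<br>
<br>

```shell
# Distro-dependent path; many kernels also expose /proc/config.gz instead.
grep -E '^CONFIG_HZ(_[0-9]+)?=' /boot/config-"$(uname -r)" \
  || zgrep -E '^CONFIG_HZ(_[0-9]+)?=' /proc/config.gz
```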
<br>
<br>
<div class="moz-cite-prefix">On 2016.01.12 11:12, thibaut noah
wrote:<br>
</div>
<blockquote
cite="mid:CANNiY3dYjuo3H4UdtAdGVjB_8XRG3vBCA9u9SDd9pBy-ykQ09A@mail.gmail.com"
type="cite">+1 on buggy USB; I personally had to purchase a USB
controller card because passing through my USB headset brought me
so many bugs (crashing VLC, lost sound in the VM, glitchy audio).<br>
Using just the sound output of the graphics card through the screen
works just fine.<br>
Though it seems I cannot run some benchmarks (most likely bad CPU
detection, since it's a vCPU), I have no problem running all my
games on ultra.<br>
<br>
I had the same problem (unresponsive host at guest shutdown), which
I solved by pinning vCPU threads to host cores (leaving the first
physical core alone) and removing the balloon device, while giving
the guest only 8GB of RAM.<br>
<div class="gmail_quote">
<div dir="ltr">On Tue, 12 Jan 2016 at 08:34, Quentin Deldycke
<<a moz-do-not-send="true"
href="mailto:quentindeldycke@gmail.com">quentindeldycke@gmail.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">For my part, most games are more than
playable...
<div>Battlefront is flawless, Fallout 4 too. I almost never
run into a laggy game.</div>
<div><br>
</div>
<div>For my part, I use my GPU as a sound card and I have
no glitches at all.</div>
<div><br>
</div>
<div>The main problem for me is Heroes of the Storm (playing
with friends, my brother...),</div>
<div>which is completely catastrophic. My FPS yo-yos
between 200 and 15 -_-'</div>
<div><br>
</div>
<div><br>
</div>
<div>But yes, motherboards are buggy as f***. For my part,
about once a month</div>
<div>the "GPU default output" dropdown in the BIOS for selecting
the iGPU or the PCIe card</div>
<div>stops having any effect: PCIe is used in every case and I
have to clear the CMOS.</div>
<div><br>
</div>
<div>Also, USB is badly buggy sometimes... When I boot the
PC, it doesn't work and detects</div>
<div>a buggy device on one of my root ports, which delays
boot by ~1 minute. But after that,</div>
<div>all ports work...</div>
</div>
<div class="gmail_extra"><br clear="all">
<div>
<div>
<div dir="ltr">--
<div>Deldycke Quentin<br>
</div>
<div>
<div><br>
</div>
</div>
</div>
</div>
</div>
</div>
<div class="gmail_extra">
<br>
<div class="gmail_quote">On 11 January 2016 at 23:44, Frank
Wilson <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:frank@zenlambda.com" target="_blank">frank@zenlambda.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">Not
sure if this is adding much to the conversation but I
gave up on<br>
my GPU passthrough project because of horrible DPC
latency. Didn't<br>
affect the graphics so much but sound was terrible.<br>
<br>
My gut feeling was the hardware wasn't good enough. I
was using an AMD<br>
8 core piledriver cpu with an Asus crosshair 5
motherboard. The<br>
motherboard has broken IVRS tables so it's not supported
by Xen.<br>
<br>
I've been looking with great interest at Intel skylake.
I'm also<br>
interested in technologies like xen-gt. However I just
don't trust<br>
motherboard manufacturers to deliver firmware without
critical bugs<br>
that knock out virtualisation features that I need.<br>
<br>
A lot of the GPU passthrough demos are very impressive
but often don't<br>
demonstrate dropout-free audio.<br>
<span><font color="#888888"><br>
<br>
Frank<br>
</font></span>
<div>
<div><br>
On 11 January 2016 at 08:59, rndbit <<a
moz-do-not-send="true"
href="mailto:rndbit@sysret.net" target="_blank">rndbit@sysret.net</a>>
wrote:<br>
> Tried Milos' config too - DPC latency got
worse. I use AMD cpu though so its<br>
> hardly comparable.<br>
> One thing to note is that both VM and bare
metal (same OS) score around 5k<br>
> points in 3dmark fire strike test (VM 300
points less). Sounds not too bad<br>
> but in reality bf4 is pretty much unplayable in
VM due to bad performance<br>
> and sound glitches while playing it on bare
metal is just fine. Again DPC<br>
> latency on bare metal even under load is ok -
occasional spike here and<br>
> there but mostly its within norm. Any kind of
load on VM makes DPC go nuts<br>
> and performance is terrible. I even tried
isolcpus=4,5,6,7 and binding vm to<br>
> those free cores - its all the same.<br>
><br>
> Interesting observation is that i used to play
titanfall without a hitch in<br>
> VM some time in the past, 3.10 kernel or so (no
patches). When i get free<br>
> moment ill try downgrading kernel, maybe
problem is there.<br>
><br>
><br>
> On 2016.01.11 10:39, Quentin Deldycke wrote:<br>
><br>
> Also, i juste saw something:<br>
><br>
> You use ultra (4k?) settings on a 770gtx. This
is too heavy for it. You have<br>
> less than 10fps. So in fact if you loose let's
say 10% of performance, you<br>
> will barely see it.<br>
><br>
> What we search is a very high reponse time.
Could you please compare your<br>
> system with a less heavy benchmark. It is
easier to see the difference at<br>
> ~50-70 fps.<br>
><br>
> In my case, this configuration work. But my fps
fluctuate quite a lot. If<br>
> you are a bit a serious gamer, this falls are
not an option during game :)<br>
><br>
> --<br>
> Deldycke Quentin<br>
><br>
><br>
> On 11 January 2016 at 08:54, Quentin Deldycke
<<a moz-do-not-send="true"
href="mailto:quentindeldycke@gmail.com"
target="_blank">quentindeldycke@gmail.com</a>><br>
> wrote:<br>
>><br>
>> Using this mode,<br>
>><br>
>> DPC Latency is hugely buggy using this
mode.<br>
>><br>
>> My fps are also moving on an apocaliptic
way: from 80 to 45 fps without<br>
>> moving on ungine valley.<br>
>><br>
>> Do you have anything working on your linux?
(i have plasma doing nothing<br>
>> on another screen)<br>
>><br>
>> Ungine heaven went back to 2600 points from
3100<br>
>> Cinebench r15: single core 124<br>
>><br>
>><br>
>> Could you please send your whole xml file,
qemu version and kernel config<br>
>> / boot?<br>
>><br>
>> I will try to get 3dmark and verify host /
virtual comparison<br>
>><br>
>> --<br>
>> Deldycke Quentin<br>
>><br>
>><br>
>> On 9 January 2016 at 20:24, Milos Kaurin
<<a moz-do-not-send="true"
href="mailto:milos.kaurin@gmail.com"
target="_blank">milos.kaurin@gmail.com</a>>
wrote:<br>
>>><br>
>>> My details:<br>
>>> Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz<br>
>>> 32GB total ram<br>
>>> hugetables@16x1GB for the guest (didn't
have much to do with 3dmark<br>
>>> results)<br>
>>><br>
>>> I have had the best performance with:<br>
>>><br>
>>> <vcpu
placement='static'>8</vcpu><br>
>>> <cpu mode='custom'
match='exact'><br>
>>> <model
fallback='allow'>host-passthrough</model><br>
>>> <topology sockets='1' cores='4'
threads='2'/><br>
>>> </cpu><br>
>>><br>
>>> No CPU pinning on either guest or host<br>
>>><br>
>>> Benchmark example (Bare metal Win10 vs
Fedora Guest Win10)<br>
>>> <a moz-do-not-send="true"
href="http://www.3dmark.com/compare/fs/7076732/fs/7076627#"
rel="noreferrer" target="_blank">http://www.3dmark.com/compare/fs/7076732/fs/7076627#</a><br>
>>><br>
>>><br>
>>> Could you try my settings and report
back?<br>
>>><br>
>>> On Sat, Jan 9, 2016 at 3:14 PM, Quentin
Deldycke<br>
>>> <<a moz-do-not-send="true"
href="mailto:quentindeldycke@gmail.com"
target="_blank">quentindeldycke@gmail.com</a>>
wrote:<br>
>>> > I use virsh:<br>
>>> ><br>
>>> > ===SNIP===<br>
>>> > <vcpu
placement='static'>3</vcpu><br>
>>> > <cputune><br>
>>> > <vcpupin vcpu='0'
cpuset='1'/><br>
>>> > <vcpupin vcpu='1'
cpuset='2'/><br>
>>> > <vcpupin vcpu='2'
cpuset='3'/><br>
>>> > <emulatorpin
cpuset='6-7'/><br>
>>> > </cputune><br>
>>> > ===SNAP===<br>
>>> ><br>
>>> > I have a prepare script running:<br>
>>> ><br>
>>> > ===SNIP===<br>
>>> > sudo mkdir /cpuset<br>
>>> > sudo mount -t cpuset none /cpuset/<br>
>>> > cd /cpuset<br>
>>> > echo 0 | sudo tee -a
cpuset.cpu_exclusive<br>
>>> > echo 0 | sudo tee -a
cpuset.mem_exclusive<br>
>>> ><br>
>>> > sudo mkdir sys<br>
>>> > echo 'Building shield for core
system... threads 0 and 4, and we place<br>
>>> > all<br>
>>> > runnning tasks there'<br>
>>> > /bin/echo 0,4 | sudo tee -a
sys/cpuset.cpus<br>
>>> > /bin/echo 0 | sudo tee -a
sys/cpuset.mems<br>
>>> > /bin/echo 0 | sudo tee -a
sys/cpuset.cpu_exclusive<br>
>>> > /bin/echo 0 | sudo tee -a
sys/cpuset.mem_exclusive<br>
>>> > for T in `cat tasks`; do sudo bash
-c "/bin/echo $T ><br>
>>> > sys/tasks">/dev/null<br>
>>> > 2>&1 ; done<br>
>>> > cd -<br>
>>> > ===SNAP===<br>
>>> ><br>
>>> > Note that i use this command line
for the kernel<br>
>>> > nohz_full=1,2,3,4,5,6,7
rcu_nocbs=1,2,3,4,5,6,7 default_hugepagesz=1G<br>
>>> > hugepagesz=1G hugepages=12<br>
>>> ><br>
>>> ><br>
>>> > --<br>
>>> > Deldycke Quentin<br>
>>> ><br>
>>> ><br>
>>> > On 9 January 2016 at 15:40, rndbit
<<a moz-do-not-send="true"
href="mailto:rndbit@sysret.net" target="_blank">rndbit@sysret.net</a>>
wrote:<br>
>>> >><br>
>>> >> Mind posting actual commands
how you achieved this?<br>
>>> >><br>
>>> >> All im doing now is this:<br>
>>> >><br>
>>> >> cset set -c 0-3 system<br>
>>> >> cset proc -m -f root -t system
-k<br>
>>> >><br>
>>> >> <vcpu
placement='static'>4</vcpu><br>
>>> >> <cputune><br>
>>> >> <vcpupin vcpu='0'
cpuset='4'/><br>
>>> >> <vcpupin vcpu='1'
cpuset='5'/><br>
>>> >> <vcpupin vcpu='2'
cpuset='6'/><br>
>>> >> <vcpupin vcpu='3'
cpuset='7'/><br>
>>> >> <emulatorpin
cpuset='0-3'/><br>
>>> >> </cputune><br>
>>> >><br>
>>> >> Basically this puts most of
threads to 0-3 cores including emulator<br>
>>> >> threads. Some threads cant be
moved though so they remain on 4-7<br>
>>> >> cores. VM<br>
>>> >> is given 4-7 cores. It works
better but there is still much to be<br>
>>> >> desired.<br>
>>> >><br>
>>> >><br>
>>> >><br>
>>> >> On 2016.01.09 15:59, Quentin
Deldycke wrote:<br>
>>> >><br>
>>> >> Hello,<br>
>>> >><br>
>>> >> Using cpuset, i was using the
vm with:<br>
>>> >><br>
>>> >> Core 0: threads 0 & 4:
linux + emulator pin<br>
>>> >> Core 1,2,3: threads
1,2,3,5,6,7: windows<br>
>>> >><br>
>>> >> I tested with:<br>
>>> >> Core 0: threads 0 & 4:
linux<br>
>>> >> Core 1,2,3: threads 1,2,3:
windows<br>
>>> >> Core 1,2,3: threads 5,6,7:
emulator<br>
>>> >><br>
>>> >> The difference between both is
huge (DPC latency is mush more stable):<br>
>>> >> Performance on single core
went up to 50% (cinebench ratio by core<br>
>>> >> from<br>
>>> >> 100 to 150 points)<br>
>>> >> Performance on gpu went up to
20% (cinebench from 80fps to 100+)<br>
>>> >> Performance on "heroes of the
storm" went from 20~30 fps to stable 60<br>
>>> >> (and<br>
>>> >> much time more than 100)<br>
>>> >><br>
>>> >> (performance of Unigine Heaven
went from 2700 points to 3100 points)<br>
>>> >><br>
>>> >> The only sad thing is that i
have the 3 idle threads which are barely<br>
>>> >> used... Is there any way to
put them back to windows?<br>
>>> >><br>
>>> >> --<br>
>>> >> Deldycke Quentin<br>
>>> >><br>
>>> >><br>
>>> >> On 29 December 2015 at 17:38,
Michael Bauer <<a moz-do-not-send="true"
href="mailto:michael@m-bauer.org" target="_blank">michael@m-bauer.org</a>><br>
>>> >> wrote:<br>
>>> >>><br>
>>> >>> I noticed that attaching a
DVD-Drive from the host leads to HUGE<br>
>>> >>> delays.<br>
>>> >>> I had attached my /dev/sr0
to the guest and even without a DVD in the<br>
>>> >>> drive<br>
>>> >>> this was causing huge lag
about once per second.<br>
>>> >>><br>
>>> >>> Best regards<br>
>>> >>> Michael<br>
>>> >>><br>
>>> >>><br>
>>> >>> Am 28.12.2015 um 19:30
schrieb rndbit:<br>
>>> >>><br>
>>> >>> 4000μs-16000μs here, its
terrible.<br>
>>> >>> Tried whats said on<br>
>>> >>> <a moz-do-not-send="true"
href="https://lime-technology.com/forum/index.php?topic=43126.15"
rel="noreferrer" target="_blank">https://lime-technology.com/forum/index.php?topic=43126.15</a><br>
>>> >>> Its a bit better with
this:<br>
>>> >>><br>
>>> >>> <vcpu
placement='static'>4</vcpu><br>
>>> >>> <cputune><br>
>>> >>> <vcpupin vcpu='0'
cpuset='4'/><br>
>>> >>> <vcpupin vcpu='1'
cpuset='5'/><br>
>>> >>> <vcpupin vcpu='2'
cpuset='6'/><br>
>>> >>> <vcpupin vcpu='3'
cpuset='7'/><br>
>>> >>> <emulatorpin
cpuset='0-3'/><br>
>>> >>> </cputune><br>
>>> >>><br>
>>> >>> I tried isolcpus but it
did not yield visible benefits. ndis.sys is<br>
>>> >>> big<br>
>>> >>> offender here but i dont
really understand why. Removing network<br>
>>> >>> interface<br>
>>> >>> from VM makes usbport.sys
take over as biggest offender. All this<br>
>>> >>> happens<br>
>>> >>> with performance governor
of all cpu cores:<br>
>>> >>><br>
>>> >>> echo performance | tee<br>
>>> >>>
/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
>/dev/null<br>
>>> >>><br>
>>> >>> Cores remain clocked at 4k
mhz. I dont know what else i could try.<br>
>>> >>> Does<br>
>>> >>> anyone have any ideas..?<br>
>>> >>><br>
>>> >>> On 2015.10.29 08:03, Eddie
Yen wrote:<br>
>>> >>><br>
>>> >>> I tested again with VM
reboot, I found that this time is about<br>
>>> >>> 1000~1500μs.<br>
>>> >>> Also I found that it
easily get high while hard drive is loading, but<br>
>>> >>> only few times.<br>
>>> >>><br>
>>> >>> Which specs you're using?
Maybe it depends on CPU or patches.<br>
>>> >>><br>
>>> >>> 2015-10-29 13:44 GMT+08:00
Blank Field <<a moz-do-not-send="true"
href="mailto:ihatethisfield@gmail.com"
target="_blank">ihatethisfield@gmail.com</a>>:<br>
>>> >>>><br>
>>> >>>> If i understand it
right, this software has a fixed latency error of<br>
>>> >>>> 1<br>
>>> >>>> ms(1000us) in windows
8-10 due to different kernel timer<br>
>>> >>>> implementation. So<br>
>>> >>>> i guess your latency
is very good.<br>
>>> >>>><br>
>>> >>>> On Oct 29, 2015 8:40
AM, "Eddie Yen" <<a moz-do-not-send="true"
href="mailto:missile0407@gmail.com"
target="_blank">missile0407@gmail.com</a>>
wrote:<br>
>>> >>>>><br>
>>> >>>>> Thanks for
information! And sorry I don'r read carefully at<br>
>>> >>>>> beginning<br>
>>> >>>>> message.<br>
>>> >>>>><br>
>>> >>>>> For my result, I
got about 1000μs below and only few times got<br>
>>> >>>>> 1000μs<br>
>>> >>>>> above when idling.<br>
>>> >>>>><br>
>>> >>>>> I'm using 4820K
and used 4 threads to VM, also I set these 4<br>
>>> >>>>> threads<br>
>>> >>>>> as 4 cores in VM
settings.<br>
>>> >>>>> The OS is Windows
10.<br>
>>> >>>>><br>
>>> >>>>> 2015-10-29 13:21
GMT+08:00 Blank Field <<a moz-do-not-send="true"
href="mailto:ihatethisfield@gmail.com"
target="_blank">ihatethisfield@gmail.com</a>>:<br>
>>> >>>>>><br>
>>> >>>>>> I think
they're using this:<br>
>>> >>>>>> <a
moz-do-not-send="true"
href="http://www.thesycon.de/deu/latency_check.shtml"
rel="noreferrer" target="_blank">www.thesycon.de/deu/latency_check.shtml</a><br>
>>> >>>>>><br>
>>> >>>>>> On Oct 29,
2015 6:11 AM, "Eddie Yen" <<a
moz-do-not-send="true"
href="mailto:missile0407@gmail.com"
target="_blank">missile0407@gmail.com</a>><br>
>>> >>>>>> wrote:<br>
>>> >>>>>>><br>
>>> >>>>>>> Sorry, but
how to check DPC Latency?<br>
>>> >>>>>>><br>
>>> >>>>>>> 2015-10-29
10:08 GMT+08:00 Nick Sukharev<br>
>>> >>>>>>> <<a
moz-do-not-send="true"
href="mailto:nicksukharev@gmail.com"
target="_blank">nicksukharev@gmail.com</a>>:<br>
>>> >>>>>>>><br>
>>> >>>>>>>> I just
checked on W7 and I get 3000μs-4000μs one one of the<br>
>>> >>>>>>>> guests<br>
>>> >>>>>>>> when 3
guests are running.<br>
>>> >>>>>>>><br>
>>> >>>>>>>> On
Wed, Oct 28, 2015 at 4:52 AM, Sergey Vlasov<br>
>>> >>>>>>>> <<a
moz-do-not-send="true"
href="mailto:sergey@vlasov.me" target="_blank">sergey@vlasov.me</a>><br>
>>> >>>>>>>> wrote:<br>
>>> >>>>>>>>><br>
>>> >>>>>>>>> On
27 October 2015 at 18:38, LordZiru <<a
moz-do-not-send="true"
href="mailto:lordziru@gmail.com" target="_blank">lordziru@gmail.com</a>><br>
>>> >>>>>>>>>
wrote:<br>
>>>
>>>>>>>>>><br>
>>>
>>>>>>>>>> I have
brutal DPC Latency on qemu, no matter if using<br>
>>>
>>>>>>>>>> pci-assign<br>
>>>
>>>>>>>>>> or vfio-pci
or without any passthrought,<br>
>>>
>>>>>>>>>><br>
>>>
>>>>>>>>>> my DPC
Latency is like:<br>
>>>
>>>>>>>>>>
10000,500,8000,6000,800,300,12000,9000,700,2000,9000<br>
>>>
>>>>>>>>>> and on
native windows 7 is like:<br>
>>>
>>>>>>>>>>
20,30,20,50,20,30,20,20,30<br>
>>> >>>>>>>>><br>
>>> >>>>>>>>><br>
>>> >>>>>>>>> In
Windows 10 guest I constantly have red bars around
3000μs<br>
>>> >>>>>>>>>
(microseconds), spiking sometimes up to 10000μs.<br>
>>> >>>>>>>>><br>
>>>
>>>>>>>>>><br>
>>>
>>>>>>>>>> I don't
know how to fix it.<br>
>>>
>>>>>>>>>> this matter
for me because i are using USB Sound Card for my<br>
>>>
>>>>>>>>>> VMs,<br>
>>>
>>>>>>>>>> and i get
sound drop-outs every 0-4 secounds<br>
>>>
>>>>>>>>>><br>
>>> >>>>>>>>><br>
>>> >>>>>>>>>
That bugs me a lot too. I also use an external USB
card and my<br>
>>> >>>>>>>>>
DAW<br>
>>> >>>>>>>>>
periodically drops out :(<br>
>>> >>>>>>>>><br>
>>> >>>>>>>>> I
haven't tried CPU pinning yet though. And perhaps I
should<br>
>>> >>>>>>>>>
try<br>
>>> >>>>>>>>>
Windows 7.<br>
>>> >>>>>>>>><br>
>>> >>>>>>>>><br>
>>> >>>>>>>>>
_______________________________________________<br>
>>> >>>>>>>>>
vfio-users mailing list<br>
>>> >>>>>>>>> <a
moz-do-not-send="true"
href="mailto:vfio-users@redhat.com"
target="_blank">vfio-users@redhat.com</a><br>
>>> >>>>>>>>> <a
moz-do-not-send="true"
href="https://www.redhat.com/mailman/listinfo/vfio-users"
rel="noreferrer" target="_blank">https://www.redhat.com/mailman/listinfo/vfio-users</a><br>
>>> >>>>>>>>><br>
>>> >>>>>>>><br>
>>> >>>>>>>><br>
>>> >>>>>>>><br>
>>> >>>>>>><br>
>>> >>>>>>><br>
>>> >>>>>>><br>
>>> >>>>><br>
>>> >>><br>
>>> >>><br>
>>> >>><br>
>>> >>><br>
>>> >>><br>
>>> >>><br>
>>> >>><br>
>>> >>><br>
>>> >>><br>
>>> >>><br>
>>> >>><br>
>>> >><br>
>>> >><br>
>>> >><br>
>>> >><br>
>>> >><br>
>>> >><br>
>>> >><br>
>>> ><br>
>>> ><br>
>>> ><br>
>><br>
>><br>
><br>
><br>
><br>
><br>
><br>
><br>
><br>
<br>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</blockquote>
</div>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
vfio-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:vfio-users@redhat.com">vfio-users@redhat.com</a>
<a class="moz-txt-link-freetext" href="https://www.redhat.com/mailman/listinfo/vfio-users">https://www.redhat.com/mailman/listinfo/vfio-users</a>
</pre>
</blockquote>
<br>
</body>
</html>