<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
I tried Milos' config too - DPC latency got worse. I use an AMD CPU,
though, so it's hardly comparable.<br>
One thing to note is that both the VM and bare metal (same OS) score
around 5k points in the 3DMark Fire Strike test (the VM about 300 points
less). That doesn't sound too bad, but in reality BF4 is pretty much
unplayable in the VM due to poor performance and sound glitches, while
playing it on bare metal is just fine. Again, DPC latency on bare metal
is OK even under load - an occasional spike here and there, but mostly
within the norm. Any kind of load on the VM makes DPC latency go nuts
and performance is terrible. I even tried isolcpus=4,5,6,7 and binding
the VM to those free cores - it's all the same.<br>
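A side note for anyone scripting this kind of pinning (my own aside, not something from the thread): the tools disagree on CPU-set syntax. cpuset files take a list form such as 0,4 or 4-7, while taskset wants a hex affinity mask. A tiny helper to convert one to the other - the function name is mine:

```shell
#!/bin/sh
# Convert a kernel-style CPU list ("0,4", "4-7", "1,2,3") into the hex
# affinity mask that `taskset` expects. Illustrative helper only.
cpulist_to_mask() {
    mask=0
    for part in $(printf '%s' "$1" | tr ',' ' '); do
        case "$part" in
            *-*)                      # expand a range like 4-7
                i=${part%-*}
                hi=${part#*-}
                while [ "$i" -le "$hi" ]; do
                    mask=$((mask | (1 << i)))
                    i=$((i + 1))
                done
                ;;
            *)                        # single CPU number
                mask=$((mask | (1 << part)))
                ;;
        esac
    done
    printf '%x\n' "$mask"
}

cpulist_to_mask 4-7   # -> f0  (the isolated cores above)
cpulist_to_mask 0,4   # -> 11  (thread 0 plus its sibling, thread 4)
```

With that, taskset -p $(cpulist_to_mask 4-7) PID pins a process to the same cores named in isolcpus=4,5,6,7.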
<br>
An interesting observation: I used to play Titanfall without a
hitch in the VM some time ago, on a 3.10 kernel or so (no patches).
When I get a free moment I'll try downgrading the kernel; maybe the
problem is there.<br>
<br>
<div class="moz-cite-prefix">On 2016.01.11 10:39, Quentin Deldycke
wrote:<br>
</div>
<blockquote
cite="mid:CAHYLta4LwoOnxUq+4NW2K2BPMmRq_VK3=dqYcDNd10mcE7PouA@mail.gmail.com"
type="cite">
<div dir="ltr">Also, I just saw something:
<div><br>
</div>
<div>You use ultra (4K?) settings on a GTX 770. This is too heavy
for it. You get less than 10 fps. So in fact if you lose,
let's say, 10% of performance, you will barely see it.</div>
<div><br>
</div>
<div>What we are looking for is a very fast response time. Could you
please compare your system with a lighter benchmark? It is
easier to see the difference at ~50-70 fps.</div>
<div><br>
</div>
<div>In my case, this configuration works, but my fps fluctuates
quite a lot. If you are at all a serious gamer, these drops are
not an option during a game :)</div>
</div>
<div class="gmail_extra"><br clear="all">
<div>
<div class="gmail_signature">
<div dir="ltr">--
<div>Deldycke Quentin<br>
</div>
<div>
<div><br>
</div>
</div>
</div>
</div>
</div>
<br>
<div class="gmail_quote">On 11 January 2016 at 08:54, Quentin
Deldycke <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:quentindeldycke@gmail.com" target="_blank">quentindeldycke@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">Using this mode,
<div><br>
</div>
<div>DPC latency is hugely erratic using this mode.</div>
<div><br>
</div>
<div>My fps also swings apocalyptically: from 80
to 45 fps while standing still in Unigine Valley.</div>
<div><br>
</div>
<div>Do you have anything running on your Linux host? (I have
Plasma idling on another screen.)</div>
<div><br>
</div>
<div>Unigine Heaven went back down to 2600 points from 3100.</div>
<div>Cinebench R15: single core, 124.</div>
<div><br>
</div>
<div><br>
</div>
<div>Could you please send your whole XML file, QEMU
version, and kernel config / boot parameters?</div>
<div><br>
</div>
<div>I will try to get 3DMark and verify the host / virtual
comparison.</div>
</div>
<div class="gmail_extra"><br clear="all">
<div>
<div>
<div dir="ltr">--
<div>Deldycke Quentin<br>
</div>
<div>
<div><br>
</div>
</div>
</div>
</div>
</div>
<div>
<div class="h5">
<br>
<div class="gmail_quote">On 9 January 2016 at 20:24,
Milos Kaurin <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:milos.kaurin@gmail.com"
target="_blank">milos.kaurin@gmail.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">My
details:<br>
Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz<br>
32GB total ram<br>
hugepages @ 16x1GB for the guest (didn't have much
effect on 3DMark results)<br>
<br>
I have had the best performance with:<br>
<br>
<vcpu placement='static'>8</vcpu><br>
<cpu mode='custom' match='exact'><br>
<model
fallback='allow'>host-passthrough</model><br>
<topology sockets='1' cores='4'
threads='2'/><br>
</cpu><br>
<br>
No CPU pinning on either guest or host<br>
<br>
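An aside on the fragment above - this is my reading of libvirt's schema, not something verified in this thread: host passthrough is normally requested with mode='host-passthrough' on the <cpu> element itself, rather than as a <model> value. An equivalent fragment might look like this:

```shell
# My reading of libvirt's domain XML schema (an assumption, not verified
# here): passthrough goes on <cpu mode='...'>, not inside <model>.
xml_fragment=$(cat <<'EOF'
<vcpu placement='static'>8</vcpu>
<cpu mode='host-passthrough'>
  <topology sockets='1' cores='4' threads='2'/>
</cpu>
EOF
)
printf '%s\n' "$xml_fragment"
```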
Benchmark example (Bare metal Win10 vs Fedora
Guest Win10)<br>
<a moz-do-not-send="true"
href="http://www.3dmark.com/compare/fs/7076732/fs/7076627#"
rel="noreferrer" target="_blank">http://www.3dmark.com/compare/fs/7076732/fs/7076627#</a><br>
<br>
<br>
Could you try my settings and report back?<br>
<div>
<div><br>
On Sat, Jan 9, 2016 at 3:14 PM, Quentin
Deldycke<br>
<<a moz-do-not-send="true"
href="mailto:quentindeldycke@gmail.com"
target="_blank">quentindeldycke@gmail.com</a>>
wrote:<br>
> I use virsh:<br>
><br>
> ===SNIP===<br>
> <vcpu
placement='static'>3</vcpu><br>
> <cputune><br>
> <vcpupin vcpu='0' cpuset='1'/><br>
> <vcpupin vcpu='1' cpuset='2'/><br>
> <vcpupin vcpu='2' cpuset='3'/><br>
> <emulatorpin cpuset='6-7'/><br>
> </cputune><br>
> ===SNAP===<br>
><br>
> I have a prepare script running:<br>
><br>
> ===SNIP===<br>
> sudo mkdir /cpuset<br>
> sudo mount -t cpuset none /cpuset/<br>
> cd /cpuset<br>
> echo 0 | sudo tee -a cpuset.cpu_exclusive<br>
> echo 0 | sudo tee -a cpuset.mem_exclusive<br>
><br>
> sudo mkdir sys<br>
> echo 'Building shield for core system...
threads 0 and 4, and we place all<br>
> running tasks there'<br>
> /bin/echo 0,4 | sudo tee -a
sys/cpuset.cpus<br>
> /bin/echo 0 | sudo tee -a sys/cpuset.mems<br>
> /bin/echo 0 | sudo tee -a
sys/cpuset.cpu_exclusive<br>
> /bin/echo 0 | sudo tee -a
sys/cpuset.mem_exclusive<br>
> for T in `cat tasks`; do sudo bash -c
"/bin/echo $T > sys/tasks">/dev/null<br>
> 2>&1 ; done<br>
> cd -<br>
> ===SNAP===<br>
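For readability, here is the quoted prepare script reassembled as one piece - a sketch of my reading of it, assuming the legacy cpuset filesystem. It defaults to a dry run that only prints each command; set DRY_RUN=0 and run as root to actually apply it:

```shell
#!/bin/sh
# Reassembled "shield" script: herd all running tasks onto threads 0 and 4
# so the remaining cores stay free for the VM. Dry run by default; the
# commands mirror the quoted script above.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else sh -c "$*"; fi
}

run "mkdir -p /cpuset"
run "mount -t cpuset none /cpuset/"
run "echo 0 > /cpuset/cpuset.cpu_exclusive"
run "echo 0 > /cpuset/cpuset.mem_exclusive"

run "mkdir -p /cpuset/sys"
run "echo 0,4 > /cpuset/sys/cpuset.cpus"   # core 0 plus its hyperthread
run "echo 0 > /cpuset/sys/cpuset.mems"
run "echo 0 > /cpuset/sys/cpuset.cpu_exclusive"
run "echo 0 > /cpuset/sys/cpuset.mem_exclusive"

if [ "$DRY_RUN" != 1 ]; then
    # Migrate every existing task into the shield; unmovable kernel
    # threads will fail silently, as in the original.
    for T in $(cat /cpuset/tasks); do
        sh -c "echo $T > /cpuset/sys/tasks" >/dev/null 2>&1
    done
fi
```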
><br>
> Note that I use this command line for the kernel:<br>
> nohz_full=1,2,3,4,5,6,7
rcu_nocbs=1,2,3,4,5,6,7 default_hugepagesz=1G<br>
> hugepagesz=1G hugepages=12<br>
><br>
><br>
> --<br>
> Deldycke Quentin<br>
><br>
><br>
> On 9 January 2016 at 15:40, rndbit <<a
moz-do-not-send="true"
href="mailto:rndbit@sysret.net"
target="_blank">rndbit@sysret.net</a>>
wrote:<br>
>><br>
>> Mind posting the actual commands for how you
achieved this?<br>
>><br>
>> All I'm doing now is this:<br>
>><br>
>> cset set -c 0-3 system<br>
>> cset proc -m -f root -t system -k<br>
>><br>
>> <vcpu
placement='static'>4</vcpu><br>
>> <cputune><br>
>> <vcpupin vcpu='0'
cpuset='4'/><br>
>> <vcpupin vcpu='1'
cpuset='5'/><br>
>> <vcpupin vcpu='2'
cpuset='6'/><br>
>> <vcpupin vcpu='3'
cpuset='7'/><br>
>> <emulatorpin cpuset='0-3'/><br>
>> </cputune><br>
>><br>
>> Basically this puts most threads on cores 0-3, including emulator<br>
>> threads. Some threads can't be moved, though, so they remain on cores 4-7. The VM<br>
>> is given cores 4-7. It works better, but there is still much to be desired.<br>
>><br>
>><br>
>><br>
>> On 2016.01.09 15:59, Quentin Deldycke
wrote:<br>
>><br>
>> Hello,<br>
>><br>
>> Using cpuset, I was running the VM
with:<br>
>><br>
>> Core 0: threads 0 & 4: linux +
emulator pin<br>
>> Core 1,2,3: threads 1,2,3,5,6,7:
windows<br>
>><br>
>> I tested with:<br>
>> Core 0: threads 0 & 4: linux<br>
>> Core 1,2,3: threads 1,2,3: windows<br>
>> Core 1,2,3: threads 5,6,7: emulator<br>
>><br>
>> The difference between the two is huge
(DPC latency is much more stable):<br>
>> Single-core performance went up by
50% (Cinebench score per core from<br>
>> 100 to 150 points)<br>
>> GPU performance went up by 20%
(Cinebench from 80 fps to 100+)<br>
>> Performance in "Heroes of the Storm"
went from 20~30 fps to a stable 60 (and<br>
>> often more than 100)<br>
>><br>
>> (performance of Unigine Heaven went
from 2700 points to 3100 points)<br>
>><br>
>> The only sad thing is that I have the
3 idle threads which are barely<br>
>> used... Is there any way to give them
back to Windows?<br>
>><br>
>> --<br>
>> Deldycke Quentin<br>
>><br>
>><br>
>> On 29 December 2015 at 17:38, Michael
Bauer <<a moz-do-not-send="true"
href="mailto:michael@m-bauer.org"
target="_blank">michael@m-bauer.org</a>>
wrote:<br>
>>><br>
>>> I noticed that attaching a
DVD drive from the host leads to HUGE delays.<br>
>>> I had attached my /dev/sr0 to the
guest, and even without a DVD in the drive<br>
>>> this was causing huge lag about
once per second.<br>
>>><br>
>>> Best regards<br>
>>> Michael<br>
>>><br>
>>><br>
>>> Am 28.12.2015 um 19:30 schrieb
rndbit:<br>
>>><br>
>>> 4000μs-16000μs here, it's
terrible.<br>
>>> Tried what's said on<br>
>>> <a moz-do-not-send="true"
href="https://lime-technology.com/forum/index.php?topic=43126.15"
rel="noreferrer" target="_blank">https://lime-technology.com/forum/index.php?topic=43126.15</a><br>
>>> It's a bit better with this:<br>
>>><br>
>>> <vcpu
placement='static'>4</vcpu><br>
>>> <cputune><br>
>>> <vcpupin vcpu='0'
cpuset='4'/><br>
>>> <vcpupin vcpu='1'
cpuset='5'/><br>
>>> <vcpupin vcpu='2'
cpuset='6'/><br>
>>> <vcpupin vcpu='3'
cpuset='7'/><br>
>>> <emulatorpin
cpuset='0-3'/><br>
>>> </cputune><br>
>>><br>
>>> I tried isolcpus but it did not
yield visible benefits. ndis.sys is the big<br>
>>> offender here, but I don't really
understand why. Removing the network interface<br>
>>> from the VM makes usbport.sys take
over as the biggest offender. All this happens<br>
>>> with the performance governor on all
CPU cores:<br>
>>><br>
>>> echo performance | tee<br>
>>>
/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
>/dev/null<br>
>>><br>
>>> Cores remain clocked at 4 GHz. I
don't know what else I could try. Does<br>
>>> anyone have any ideas?<br>
>>><br>
>>> On 2015.10.29 08:03, Eddie Yen
wrote:<br>
>>><br>
>>> I tested again after a VM reboot; I
found that this time it is about<br>
>>> 1000~1500μs.<br>
>>> I also found that it easily gets
high while the hard drive is loading, but<br>
>>> only a few times.<br>
>>><br>
>>> Which specs are you using? Maybe
it depends on the CPU or patches.<br>
>>><br>
>>> 2015-10-29 13:44 GMT+08:00 Blank
Field <<a moz-do-not-send="true"
href="mailto:ihatethisfield@gmail.com"
target="_blank">ihatethisfield@gmail.com</a>>:<br>
>>>><br>
>>>> If I understand it right,
this software has a fixed latency error of 1<br>
>>>> ms (1000μs) on Windows 8-10
due to a different kernel timer implementation.
So<br>
>>>> I guess your latency is very
good.<br>
>>>><br>
>>>> On Oct 29, 2015 8:40 AM,
"Eddie Yen" <<a moz-do-not-send="true"
href="mailto:missile0407@gmail.com"
target="_blank">missile0407@gmail.com</a>>
wrote:<br>
>>>>><br>
>>>>> Thanks for the information!
And sorry I didn't read the beginning of the<br>
>>>>> message carefully.<br>
>>>>><br>
>>>>> For my result, I got
below about 1000μs, and only a few times above
1000μs,<br>
>>>>> when idling.<br>
>>>>><br>
>>>>> I'm using a 4820K and gave
4 threads to the VM; I also set these 4 threads<br>
>>>>> as 4 cores in the VM
settings.<br>
>>>>> The OS is Windows 10.<br>
>>>>><br>
>>>>> 2015-10-29 13:21
GMT+08:00 Blank Field <<a
moz-do-not-send="true"
href="mailto:ihatethisfield@gmail.com"
target="_blank">ihatethisfield@gmail.com</a>>:<br>
>>>>>><br>
>>>>>> I think they're using
this:<br>
>>>>>> <a
moz-do-not-send="true"
href="http://www.thesycon.de/deu/latency_check.shtml"
rel="noreferrer" target="_blank">www.thesycon.de/deu/latency_check.shtml</a><br>
>>>>>><br>
>>>>>> On Oct 29, 2015 6:11
AM, "Eddie Yen" <<a moz-do-not-send="true"
href="mailto:missile0407@gmail.com"
target="_blank">missile0407@gmail.com</a>>
wrote:<br>
>>>>>>><br>
>>>>>>> Sorry, but how do I
check DPC latency?<br>
>>>>>>><br>
>>>>>>> 2015-10-29 10:08
GMT+08:00 Nick Sukharev <<a
moz-do-not-send="true"
href="mailto:nicksukharev@gmail.com"
target="_blank">nicksukharev@gmail.com</a>>:<br>
>>>>>>>><br>
>>>>>>>> I just
checked on W7 and I get 3000μs-4000μs on one
of the guests<br>
>>>>>>>> when 3 guests
are running.<br>
>>>>>>>><br>
>>>>>>>> On Wed, Oct
28, 2015 at 4:52 AM, Sergey Vlasov <<a
moz-do-not-send="true"
href="mailto:sergey@vlasov.me"
target="_blank">sergey@vlasov.me</a>><br>
>>>>>>>> wrote:<br>
>>>>>>>>><br>
>>>>>>>>> On 27
October 2015 at 18:38, LordZiru <<a
moz-do-not-send="true"
href="mailto:lordziru@gmail.com"
target="_blank">lordziru@gmail.com</a>>
wrote:<br>
>>>>>>>>>><br>
>>>>>>>>>> I
have brutal DPC latency on QEMU, no matter if
using pci-assign<br>
>>>>>>>>>> or
vfio-pci or without any passthrough,<br>
>>>>>>>>>><br>
>>>>>>>>>> my
DPC latency is like:<br>
>>>>>>>>>>
10000,500,8000,6000,800,300,12000,9000,700,2000,9000<br>
>>>>>>>>>> and
on native Windows 7 it is like:<br>
>>>>>>>>>>
20,30,20,50,20,30,20,20,30<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>> In a
Windows 10 guest I constantly have red bars
around 3000μs (microseconds),<br>
>>>>>>>>> sometimes
spiking up to 10000μs.<br>
>>>>>>>>><br>
>>>>>>>>>><br>
>>>>>>>>>> I
don't know how to fix it.<br>
>>>>>>>>>> This
matters to me because I am using a USB sound
card for my VMs,<br>
>>>>>>>>>> and I
get sound drop-outs every 0-4 seconds.<br>
>>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>> That bugs
me a lot too. I also use an external USB card
and my DAW<br>
>>>>>>>>>
periodically drops out :(<br>
>>>>>>>>><br>
>>>>>>>>> I haven't
tried CPU pinning yet though. And perhaps I
should try<br>
>>>>>>>>> Windows
7.<br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>>>><br>
>>>>><br>
>>><br>
>>><br>
>>><br>
>>><br>
>>><br>
>>><br>
>>><br>
>>><br>
>>><br>
>>><br>
>>><br>
>><br>
>><br>
>><br>
>><br>
>><br>
>><br>
>><br>
><br>
><br>
><br>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
vfio-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:vfio-users@redhat.com">vfio-users@redhat.com</a>
<a class="moz-txt-link-freetext" href="https://www.redhat.com/mailman/listinfo/vfio-users">https://www.redhat.com/mailman/listinfo/vfio-users</a>
</pre>
</blockquote>
<br>
</body>
</html>