<html>
  <head>
    <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    This may indeed be a <b>kernel regression</b>. I tested the 3.10,
    3.12, and 4.1 kernels. The Valley benchmark gives ~1500 points on
    3.10, while 3.12 and 4.1 get only ~1300, a 200-point difference.
    With the 3.10 kernel I also get better DPC latency (more yellow
    bars than with any other kernel). However, 3.10 also makes things
    act weird. For example, sometimes after boot it takes a while until
    the UAC prompt starts working: it freezes the calling application
    for a long time, 10 minutes or so, before it starts responding, but
    after that it is fine. I wanted to try BF4 online, but it would
    never load a level (it always crashes silently at some point),
    while single player worked. The 3.12 and 4.1 kernels had no problem
    loading an online session. 3.10 also often freezes the host on
    guest shutdown. All tested kernel versions are stock kernels
    without any patches (basically whatever comes from the AUR or the
    Arch Linux repos).<br>
    <br>
    All of this is with the following config:<br>
    <blockquote type="cite">  <vcpu
      placement='static'>4</vcpu><br>
        <cputune><br>
          <vcpupin vcpu='0' cpuset='4'/><br>
          <vcpupin vcpu='1' cpuset='5'/><br>
          <vcpupin vcpu='2' cpuset='6'/><br>
          <vcpupin vcpu='3' cpuset='7'/><br>
          <emulatorpin cpuset='0-3'/><br>
        </cputune></blockquote>
    <blockquote type="cite">  <cpu mode='host-model'><br>
          <model fallback='allow'/><br>
          <topology sockets='1' cores='4' threads='1'/><br>
        </cpu></blockquote>
    <blockquote type="cite">cset set -c 0-3 system > /dev/null</blockquote>
    <blockquote type="cite">cset proc -m -f root -t system -k</blockquote>
    <blockquote type="cite">nohz_full=2,3,4,5,6,7 rcu_nocbs=2,3,4,5,6,7</blockquote>
    So we have a fast kernel that is glitchy, and slower kernels that
    work reasonably well. Just my luck...<br>
    <br>
    <div class="moz-cite-prefix">On 2016.01.11 17:08, Quentin Deldycke
      wrote:<br>
    </div>
    <blockquote
cite="mid:CAHYLta7GjsoW4zGouNdZB38FVJtQ9bj8LRC-f0GAzGfh6ZskpA@mail.gmail.com"
      type="cite">
      <div dir="ltr">Hello,
        <div><br>
        </div>
        <div>I use an Intel CPU (i7 4790K), but yes, I have an R9 290 as
          the GPU.</div>
        <div>I try to offload to core 0, so in fact I can keep threads 0
          and 4 for Linux.</div>
        <div>The rest of your summary is right.</div>
        <div><br>
        </div>
        <div><br>
        </div>
        <div>I use the same program for the DPC check. LatencyMon is
          also available:</div>
        <div>(though I find dpclat more interesting)</div>
        <div><a moz-do-not-send="true"
            href="http://www.resplendence.com/latencymon"
            target="_blank">http://www.resplendence.com/latencymon</a><br>
        </div>
        <div><br>
        </div>
        <div>Script for moving all threads to 0,4:<br>
        </div>
        <div><a moz-do-not-send="true"
            href="https://github.com/qdel/scripts/blob/master/vfio/shieldbuild">https://github.com/qdel/scripts/blob/master/vfio/shieldbuild<br>
          </a></div>
        <div><br>
        </div>
        <div>XML file:</div>
        <div><a moz-do-not-send="true"
            href="https://github.com/qdel/scripts/blob/master/vfio/win10.xml">https://github.com/qdel/scripts/blob/master/vfio/win10.xml<br>
          </a></div>
        <div><br>
        </div>
        <div>Kernel command line:</div>
        <div>intel_iommu=on iommu=pt
          vfio_iommu_type1.allow_unsafe_interrupts=1</div>
        <div>kvm.ignore_msrs=1 drm.rnodes=1 i915.modeset=1</div>
        <div>nohz_full=1,2,3,4,5,6,7 rcu_nocbs=1,2,3,4,5,6,7</div>
        <div>default_hugepagesz=1G hugepagesz=1G hugepages=12</div>
        <div><br>
        </div>
        <div><br>
        </div>
        <div>Notes about my setup:</div>
        <div>* I have 3 monitors. All are connected to the Intel GPU.</div>
        <div>  2 of them also have an input on the AMD card. With xrandr</div>
        <div>  I can disable these screens and they switch source</div>
        <div>  (at least one does; the other is buggy, and most times I
          need to push the source button).</div>
        <div>* I also pass through an NVMe drive (this thing is actually
          BRUTAL!!!)</div>
        <div>  - I can boot the same drive natively!</div>
        <div>* I pass through my second network card.</div>
        <div>* I pass through one of my SATA controllers (I have NTFS
          drives there).</div>
        <div>* I pass through individual USB devices, not the whole controller</div>
        <div>  - with a little udev script I attach new devices to the
          VM if it is running.</div>
        <div>* Sound is output over HDMI and fed back into the line-in of
          the PC. I can use the line-in control to</div>
        <div>  adjust the volume of the whole VM. Works perfectly,
          actually.</div>
        <div>* The best thing for this is dual monitor + Synergy :)</div>
        <div><br>
        </div>
      </div>
      <div class="gmail_extra"><br clear="all">
        <div>
          <div class="gmail_signature">
            <div dir="ltr">--
              <div>Deldycke Quentin<br>
              </div>
              <div>
                <div><br>
                </div>
              </div>
            </div>
          </div>
        </div>
        <br>
        <div class="gmail_quote">On 11 January 2016 at 15:06, Milos
          Kaurin <span dir="ltr"><<a moz-do-not-send="true"
              href="mailto:milos.kaurin@gmail.com" target="_blank">milos.kaurin@gmail.com</a>></span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0
            .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello,<br>
            <br>
            Yes, I have a corei7.<br>
            <br>
            I have to admit that seeing Quentin's e-mail was the first time I<br>
            found out about DPC latency. I'm taking a strictly empirical<br>
            approach for now, but I'd like to dive deeper into this, at least<br>
            to provide a reference point for you guys.<br>
            The reason is that even though I'm familiar with Linux, I don't<br>
            have the low-level familiarity you guys have (other than<br>
            conceptual). I'm more than willing to learn given the<br>
            opportunity, though.<br>
            <br>
            Quentin:<br>
            From what I understand about your usage:<br>
            * You have an AMD CPU<br>
            * In your kernel parameters, you are trying to offload your<br>
            scheduling-clock interrupts to only thread (core?) 0.<br>
            * Your script sets kernel memory management, future tasks and<br>
            current tasks to run on thread 0.<br>
            * Valley bench seems to be most sensitive to DPC latency
            issues (as<br>
            well as "Heroes of the storm")<br>
            * Pinning only 3 cores to the VM gives you the best results, but<br>
            seeing that newer games take advantage of multiple cores, you'd<br>
            like to have an option to use more cores for winVirt.<br>
            <br>
            What I'd like from you:<br>
            * Can you provide me with the optimal (3 cores -> VM) settings,<br>
            including kernel parameters, your updated script and the XML of<br>
            your virt in this mode of use?<br>
            * Can you provide me with a method for keeping track of DPC<br>
            latency? I found this: <a moz-do-not-send="true"
              href="http://www.thesycon.de/deu/latency_check.shtml"
              rel="noreferrer" target="_blank">http://www.thesycon.de/deu/latency_check.shtml</a>, but I'd<br>
            like us to use the same method.<br>
            <br>
            Why I'm asking all of this:<br>
            Just ran valley (HD extreme). These are the results:<br>
            <br>
            * Bare-metal:<br>
            FPS: 48.7<br>
            Score: 2036<br>
            Min FPS: 23.4<br>
            Max FPS: 90.6<br>
            <br>
            * hugepages, no pinning, 1x4x2 topology, host-passthrough:<br>
            FPS: 47.9<br>
            Score: 2005<br>
            Min FPS: 19.7<br>
            Max FPS: 91.5<br>
            <br>
            The score is ~1.5% worse in the virt.<br>
            The min FPS difference (which looks significant) might be<br>
            negligible because I'm running Firefox on the host with a bunch<br>
            of tabs open (idle, though).<br>
            <br>
            I have also been playing "Rocket League" in the virt, which is a<br>
            very twitchy game, and I play it at an experienced level. I did<br>
            not find any problems playing the game like this.<br>
            <br>
            My current XML: <a moz-do-not-send="true"
              href="https://gist.github.com/Kaurin/0b6726e8a94084bd0b64"
              rel="noreferrer" target="_blank">https://gist.github.com/Kaurin/0b6726e8a94084bd0b64</a><br>
            PCI devices passed through: nvidia+HDMI audio, onboard
            sound, onboard<br>
            XHCI USB controller<br>
            <br>
            Notes about my setup:<br>
            * Both virt and host are hooked up to the same monitor<br>
            (host - VGA / virt - DVI).<br>
            * I also don't have any additional USB controllers, which means<br>
            that when I turn on the virt, I lose my USB (mouse, keyboard) on<br>
            the host.<br>
            * The same goes for sound: when I turn on the virt, I lose sound<br>
            on the host.<br>
            * I just flip the monitor input and I'm good to go.<br>
            * I have plans to set up new hardware so I can use both host and<br>
            virt at the same time.<br>
            <br>
            Let me know if my further input would be useful.<br>
            <br>
            Regards,<br>
            Milos<br>
            <div class="HOEnZb">
              <div class="h5"><br>
                <br>
                <br>
                On Mon, Jan 11, 2016 at 9:19 AM, Quentin Deldycke<br>
                <<a moz-do-not-send="true"
                  href="mailto:quentindeldycke@gmail.com">quentindeldycke@gmail.com</a>>
                wrote:<br>
                > In fact, some games react quite well to this
                latency. Fallout for example<br>
                > doesn't show much difference between host - vm with
                brutal DPC and vm with<br>
                > "good dpc".<br>
                ><br>
                > I tested 3 modes:<br>
                ><br>
                > - all 8 core to vm without pinning: brutal dpc, did
                not tried to play games<br>
                > on it. Only ungine valley => 2600 points<br>
                > - 6 cores pinned to the vm + emulator on core 0,1:
                correct latency. Most<br>
                > games work flawlessly (bf4 / battlefront / diablo
                III) but some are<br>
                > catastrophic: Heroes of the storm. valley =>
                2700<br>
                > - 3 cores pinned to vm: Perfect latency, all games
                work ok. But i am affraid<br>
                > 3 cores are a bit 'not enough" for incoming games.
                valley => 3100 points<br>
                ><br>
                > I think that valley is  a good benchmark. It is
                free and small. It seems to<br>
                > be affected by this latency problem like most
                games.<br>
                ><br>
                ><br>
                ><br>
                ><br>
                > --<br>
                > Deldycke Quentin<br>
                ><br>
                ><br>
                > On 11 January 2016 at 09:59, rndbit <<a
                  moz-do-not-send="true" href="mailto:rndbit@sysret.net"><a class="moz-txt-link-abbreviated" href="mailto:rndbit@sysret.net">rndbit@sysret.net</a></a>>
                wrote:<br>
                >><br>
                >> Tried Milos' config too - DPC latency got
                worse. I use AMD cpu though so<br>
                >> its hardly comparable.<br>
                >> One thing to note is that both VM and bare
                metal (same OS) score around 5k<br>
                >> points in 3dmark fire strike test (VM 300
                points less). Sounds not too bad<br>
                >> but in reality bf4 is pretty much unplayable in
                VM due to bad performance<br>
                >> and sound glitches while playing it on bare
                metal is just fine. Again DPC<br>
                >> latency on bare metal even under load is ok -
                occasional spike here and<br>
                >> there but mostly its within norm. Any kind of
                load on VM makes DPC go nuts<br>
                >> and performance is terrible. I even tried
                isolcpus=4,5,6,7 and binding vm to<br>
                >> those free cores - its all the same.<br>
                >><br>
                >> Interesting observation is that i used to play
                titanfall without a hitch<br>
                >> in VM some time in the past, 3.10 kernel or so
                (no patches). When i get free<br>
                >> moment ill try downgrading kernel, maybe
                problem is there.<br>
                >><br>
                >><br>
                >> On 2016.01.11 10:39, Quentin Deldycke wrote:<br>
                >><br>
                >> Also, i juste saw something:<br>
                >><br>
                >> You use ultra (4k?) settings on a 770gtx. This
                is too heavy for it. You<br>
                >> have less than 10fps. So in fact if you loose
                let's say 10% of performance,<br>
                >> you will barely see it.<br>
                >><br>
                >> What we search is a very high reponse time.
                Could you please compare your<br>
                >> system with a less heavy benchmark. It is
                easier to see the difference at<br>
                >> ~50-70 fps.<br>
                >><br>
                >> In my case, this configuration work. But my fps
                fluctuate quite a lot. If<br>
                >> you are a bit a serious gamer, this falls are
                not an option during game :)<br>
                >><br>
                >> --<br>
                >> Deldycke Quentin<br>
                >><br>
                >><br>
                >> On 11 January 2016 at 08:54, Quentin Deldycke
                <<a moz-do-not-send="true"
                  href="mailto:quentindeldycke@gmail.com">quentindeldycke@gmail.com</a>><br>
                >> wrote:<br>
                >>><br>
                >>> Using this mode,<br>
                >>><br>
                >>> DPC Latency is hugely buggy using this
                mode.<br>
                >>><br>
                >>> My fps are also moving on an apocaliptic
                way: from 80 to 45 fps without<br>
                >>> moving on ungine valley.<br>
                >>><br>
                >>> Do you have anything working on your linux?
                (i have plasma doing nothing<br>
                >>> on another screen)<br>
                >>><br>
                >>> Ungine heaven went back to 2600 points from
                3100<br>
                >>> Cinebench r15: single core 124<br>
                >>><br>
                >>><br>
                >>> Could you please send your whole xml file,
                qemu version and kernel config<br>
                >>> / boot?<br>
                >>><br>
                >>> I will try to get 3dmark and verify host /
                virtual comparison<br>
                >>><br>
                >>> --<br>
                >>> Deldycke Quentin<br>
                >>><br>
                >>><br>
                >>> On 9 January 2016 at 20:24, Milos Kaurin
                <<a moz-do-not-send="true"
                  href="mailto:milos.kaurin@gmail.com">milos.kaurin@gmail.com</a>>
                wrote:<br>
                >>>><br>
                >>>> My details:<br>
                >>>> Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz<br>
                >>>> 32GB total ram<br>
                >>>> hugetables@16x1GB for the guest (didn't
                have much to do with 3dmark<br>
                >>>> results)<br>
                >>>><br>
                >>>> I have had the best performance with:<br>
                >>>><br>
                >>>>   <vcpu
                placement='static'>8</vcpu><br>
                >>>>   <cpu mode='custom'
                match='exact'><br>
                >>>>     <model
                fallback='allow'>host-passthrough</model><br>
                >>>>     <topology sockets='1' cores='4'
                threads='2'/><br>
                >>>>   </cpu><br>
                >>>><br>
                >>>> No CPU pinning on either guest or host<br>
                >>>><br>
                >>>> Benchmark example (Bare metal Win10 vs
                Fedora Guest Win10)<br>
                >>>> <a moz-do-not-send="true"
                  href="http://www.3dmark.com/compare/fs/7076732/fs/7076627#"
                  rel="noreferrer" target="_blank">http://www.3dmark.com/compare/fs/7076732/fs/7076627#</a><br>
                >>>><br>
                >>>><br>
                >>>> Could you try my settings and report
                back?<br>
                >>>><br>
                >>>> On Sat, Jan 9, 2016 at 3:14 PM, Quentin
                Deldycke<br>
                >>>> <<a moz-do-not-send="true"
                  href="mailto:quentindeldycke@gmail.com">quentindeldycke@gmail.com</a>>
                wrote:<br>
                >>>> > I use virsh:<br>
                >>>> ><br>
                >>>> > ===SNIP===<br>
                >>>> >   <vcpu
                placement='static'>3</vcpu><br>
                >>>> >   <cputune><br>
                >>>> >     <vcpupin vcpu='0'
                cpuset='1'/><br>
                >>>> >     <vcpupin vcpu='1'
                cpuset='2'/><br>
                >>>> >     <vcpupin vcpu='2'
                cpuset='3'/><br>
                >>>> >     <emulatorpin
                cpuset='6-7'/><br>
                >>>> >   </cputune><br>
                >>>> > ===SNAP===<br>
                >>>> ><br>
                >>>> > I have a prepare script running:<br>
                >>>> ><br>
                >>>> > ===SNIP===<br>
                >>>> > sudo mkdir /cpuset<br>
                >>>> > sudo mount -t cpuset none /cpuset/<br>
                >>>> > cd /cpuset<br>
                >>>> > echo 0 | sudo tee -a
                cpuset.cpu_exclusive<br>
                >>>> > echo 0 | sudo tee -a
                cpuset.mem_exclusive<br>
                >>>> ><br>
                >>>> > sudo mkdir sys<br>
                >>>> > echo 'Building shield for core
                system... threads 0 and 4, and we place<br>
                >>>> > all<br>
                >>>> > runnning tasks there'<br>
                >>>> > /bin/echo 0,4 | sudo tee -a
                sys/cpuset.cpus<br>
                >>>> > /bin/echo 0 | sudo tee -a
                sys/cpuset.mems<br>
                >>>> > /bin/echo 0 | sudo tee -a
                sys/cpuset.cpu_exclusive<br>
                >>>> > /bin/echo 0 | sudo tee -a
                sys/cpuset.mem_exclusive<br>
                >>>> > for T in `cat tasks`; do sudo bash
                -c "/bin/echo $T ><br>
                >>>> > sys/tasks">/dev/null<br>
                >>>> > 2>&1 ; done<br>
                >>>> > cd -<br>
                >>>> > ===SNAP===<br>
                >>>> ><br>
                >>>> > Note that i use this command line
                for the kernel<br>
                >>>> > nohz_full=1,2,3,4,5,6,7
                rcu_nocbs=1,2,3,4,5,6,7 default_hugepagesz=1G<br>
                >>>> > hugepagesz=1G hugepages=12<br>
                >>>> ><br>
                >>>> ><br>
                >>>> > --<br>
                >>>> > Deldycke Quentin<br>
                >>>> ><br>
                >>>> ><br>
                >>>> > On 9 January 2016 at 15:40, rndbit
                <<a moz-do-not-send="true"
                  href="mailto:rndbit@sysret.net">rndbit@sysret.net</a>>
                wrote:<br>
                >>>> >><br>
                >>>> >> Mind posting actual commands
                how you achieved this?<br>
                >>>> >><br>
                >>>> >> All im doing now is this:<br>
                >>>> >><br>
                >>>> >> cset set -c 0-3 system<br>
                >>>> >> cset proc -m -f root -t system
                -k<br>
                >>>> >><br>
                >>>> >>   <vcpu
                placement='static'>4</vcpu><br>
                >>>> >>   <cputune><br>
                >>>> >>     <vcpupin vcpu='0'
                cpuset='4'/><br>
                >>>> >>     <vcpupin vcpu='1'
                cpuset='5'/><br>
                >>>> >>     <vcpupin vcpu='2'
                cpuset='6'/><br>
                >>>> >>     <vcpupin vcpu='3'
                cpuset='7'/><br>
                >>>> >>     <emulatorpin
                cpuset='0-3'/><br>
                >>>> >>   </cputune><br>
                >>>> >><br>
                >>>> >> Basically this puts most of
                threads to 0-3 cores including emulator<br>
                >>>> >> threads. Some threads cant be
                moved though so they remain on 4-7<br>
                >>>> >> cores. VM<br>
                >>>> >> is given 4-7 cores. It works
                better but there is still much to be<br>
                >>>> >> desired.<br>
                >>>> >><br>
                >>>> >><br>
                >>>> >><br>
                >>>> >> On 2016.01.09 15:59, Quentin
                Deldycke wrote:<br>
                >>>> >><br>
                >>>> >> Hello,<br>
                >>>> >><br>
                >>>> >> Using cpuset, i was using the
                vm with:<br>
                >>>> >><br>
                >>>> >> Core 0: threads 0 & 4:
                linux + emulator pin<br>
                >>>> >> Core 1,2,3: threads
                1,2,3,5,6,7: windows<br>
                >>>> >><br>
                >>>> >> I tested with:<br>
                >>>> >> Core 0: threads 0 & 4:
                linux<br>
                >>>> >> Core 1,2,3: threads 1,2,3:
                windows<br>
                >>>> >> Core 1,2,3: threads 5,6,7:
                emulator<br>
                >>>> >><br>
                >>>> >> The difference between both is
                huge (DPC latency is mush more<br>
                >>>> >> stable):<br>
                >>>> >> Performance on single core
                went up to 50% (cinebench ratio by core<br>
                >>>> >> from<br>
                >>>> >> 100 to 150 points)<br>
                >>>> >> Performance on gpu went up to
                20% (cinebench from 80fps to 100+)<br>
                >>>> >> Performance on "heroes of the
                storm" went from 20~30 fps to stable 60<br>
                >>>> >> (and<br>
                >>>> >> much time more than 100)<br>
                >>>> >><br>
                >>>> >> (performance of Unigine Heaven
                went from 2700 points to 3100 points)<br>
                >>>> >><br>
                >>>> >> The only sad thing is that i
                have the 3 idle threads which are barely<br>
                >>>> >> used... Is there any way to
                put them back to windows?<br>
                >>>> >><br>
                >>>> >> --<br>
                >>>> >> Deldycke Quentin<br>
                >>>> >><br>
                >>>> >><br>
                >>>> >> On 29 December 2015 at 17:38,
                Michael Bauer <<a moz-do-not-send="true"
                  href="mailto:michael@m-bauer.org">michael@m-bauer.org</a>><br>
                >>>> >> wrote:<br>
                >>>> >>><br>
                >>>> >>> I noticed that attaching a
                DVD-Drive from the host leads to HUGE<br>
                >>>> >>> delays.<br>
                >>>> >>> I had attached my /dev/sr0
                to the guest and even without a DVD in<br>
                >>>> >>> the drive<br>
                >>>> >>> this was causing huge lag
                about once per second.<br>
                >>>> >>><br>
                >>>> >>> Best regards<br>
                >>>> >>> Michael<br>
                >>>> >>><br>
                >>>> >>><br>
                >>>> >>> Am 28.12.2015 um 19:30
                schrieb rndbit:<br>
                >>>> >>><br>
                >>>> >>> 4000μs-16000μs here, its
                terrible.<br>
                >>>> >>> Tried whats said on<br>
                >>>> >>> <a moz-do-not-send="true"
href="https://lime-technology.com/forum/index.php?topic=43126.15"
                  rel="noreferrer" target="_blank">https://lime-technology.com/forum/index.php?topic=43126.15</a><br>
                >>>> >>> Its a bit better with
                this:<br>
                >>>> >>><br>
                >>>> >>>   <vcpu
                placement='static'>4</vcpu><br>
                >>>> >>>   <cputune><br>
                >>>> >>>     <vcpupin vcpu='0'
                cpuset='4'/><br>
                >>>> >>>     <vcpupin vcpu='1'
                cpuset='5'/><br>
                >>>> >>>     <vcpupin vcpu='2'
                cpuset='6'/><br>
                >>>> >>>     <vcpupin vcpu='3'
                cpuset='7'/><br>
                >>>> >>>     <emulatorpin
                cpuset='0-3'/><br>
                >>>> >>>   </cputune><br>
                >>>> >>><br>
                >>>> >>> I tried isolcpus but it
                did not yield visible benefits. ndis.sys is<br>
                >>>> >>> big<br>
                >>>> >>> offender here but i dont
                really understand why. Removing network<br>
                >>>> >>> interface<br>
                >>>> >>> from VM makes usbport.sys
                take over as biggest offender. All this<br>
                >>>> >>> happens<br>
                >>>> >>> with performance governor
                of all cpu cores:<br>
                >>>> >>><br>
                >>>> >>> echo performance | tee<br>
                >>>> >>>
                /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
                >/dev/null<br>
                >>>> >>><br>
                >>>> >>> Cores remain clocked at 4k
                mhz. I dont know what else i could try.<br>
                >>>> >>> Does<br>
                >>>> >>> anyone have any ideas..?<br>
                >>>> >>><br>
                >>>> >>> On 2015.10.29 08:03, Eddie
                Yen wrote:<br>
                >>>> >>><br>
                >>>> >>> I tested again with VM
                reboot, I found that this time is about<br>
                >>>> >>> 1000~1500μs.<br>
                >>>> >>> Also I found that it
                easily get high while hard drive is loading,<br>
                >>>> >>> but<br>
                >>>> >>> only few times.<br>
                >>>> >>><br>
                >>>> >>> Which specs you're using?
                Maybe it depends on CPU or patches.<br>
                >>>> >>><br>
                >>>> >>> 2015-10-29 13:44 GMT+08:00
                Blank Field <<a moz-do-not-send="true"
                  href="mailto:ihatethisfield@gmail.com">ihatethisfield@gmail.com</a>>:<br>
                >>>> >>>><br>
                >>>> >>>> If i understand it
                right, this software has a fixed latency error<br>
                >>>> >>>> of 1<br>
                >>>> >>>> ms(1000us) in windows
                8-10 due to different kernel timer<br>
                >>>> >>>> implementation. So<br>
                >>>> >>>> i guess your latency
                is very good.<br>
                >>>> >>>><br>
                >>>> >>>> On Oct 29, 2015 8:40
                AM, "Eddie Yen" <<a moz-do-not-send="true"
                  href="mailto:missile0407@gmail.com">missile0407@gmail.com</a>>
                wrote:<br>
                >>>> >>>>><br>
                >>>> >>>>> Thanks for
                information! And sorry I don'r read carefully at<br>
                >>>> >>>>> beginning<br>
                >>>> >>>>> message.<br>
                >>>> >>>>><br>
                >>>> >>>>> For my result, I
                got about 1000μs below and only few times got<br>
                >>>> >>>>> 1000μs<br>
                >>>> >>>>> above when idling.<br>
                >>>> >>>>><br>
                >>>> >>>>> I'm using 4820K
                and used 4 threads to VM, also  I set these 4<br>
                >>>> >>>>> threads<br>
                >>>> >>>>> as 4 cores in VM
                settings.<br>
                >>>> >>>>> The OS is Windows
                10.<br>
                >>>> >>>>><br>
                >>>> >>>>> 2015-10-29 13:21
                GMT+08:00 Blank Field <<a moz-do-not-send="true"
                  href="mailto:ihatethisfield@gmail.com">ihatethisfield@gmail.com</a>>:<br>
                >>>> >>>>>><br>
                >>>> >>>>>> I think
                they're using this:<br>
                >>>> >>>>>> <a
                  moz-do-not-send="true"
                  href="http://www.thesycon.de/deu/latency_check.shtml"
                  rel="noreferrer" target="_blank"><a class="moz-txt-link-abbreviated" href="http://www.thesycon.de/deu/latency_check.shtml">www.thesycon.de/deu/latency_check.shtml</a></a><br>
                >>>> >>>>>><br>
                >>>> >>>>>> On Oct 29,
                2015 6:11 AM, "Eddie Yen" <<a moz-do-not-send="true"
                  href="mailto:missile0407@gmail.com">missile0407@gmail.com</a>><br>
                >>>> >>>>>> wrote:<br>
                >>>> >>>>>>><br>
                >>>> >>>>>>> Sorry, but
                how to check DPC Latency?<br>
                >>>> >>>>>>><br>
                >>>> >>>>>>> 2015-10-29
                10:08 GMT+08:00 Nick Sukharev<br>
                >>>> >>>>>>> <<a
                  moz-do-not-send="true"
                  href="mailto:nicksukharev@gmail.com"><a class="moz-txt-link-abbreviated" href="mailto:nicksukharev@gmail.com">nicksukharev@gmail.com</a></a>>:<br>
                >>>> >>>>>>>><br>
                >>>> >>>>>>>> I just
                checked on W7 and I get 3000μs-4000μs one one of the<br>
                >>>> >>>>>>>> guests<br>
                >>>> >>>>>>>> when 3
                guests are running.<br>
                >>>> >>>>>>>><br>
                >>>> >>>>>>>> On
                Wed, Oct 28, 2015 at 4:52 AM, Sergey Vlasov<br>
                >>>> >>>>>>>> <<a
                  moz-do-not-send="true" href="mailto:sergey@vlasov.me"><a class="moz-txt-link-abbreviated" href="mailto:sergey@vlasov.me">sergey@vlasov.me</a></a>><br>
                >>>> >>>>>>>> wrote:<br>
                >>>> >>>>>>>>><br>
                >>>> >>>>>>>>> On
                27 October 2015 at 18:38, LordZiru <<a
                  moz-do-not-send="true"
                  href="mailto:lordziru@gmail.com"><a class="moz-txt-link-abbreviated" href="mailto:lordziru@gmail.com">lordziru@gmail.com</a></a>><br>
                >>>> >>>>>>>>>
                wrote:<br>
                >>>>
                >>>>>>>>>><br>
                >>>>
                >>>>>>>>>> I have brutal
                DPC Latency on qemu, no matter if using<br>
                >>>>
                >>>>>>>>>> pci-assign<br>
                >>>>
                >>>>>>>>>> or vfio-pci or
                without any passthrought,<br>
                >>>>
                >>>>>>>>>><br>
                >>>>
                >>>>>>>>>> my DPC Latency
                is like:<br>
                >>>>
                >>>>>>>>>>
                10000,500,8000,6000,800,300,12000,9000,700,2000,9000<br>
                >>>>
                >>>>>>>>>> and on native
                windows 7 is like:<br>
                >>>>
                >>>>>>>>>>
                20,30,20,50,20,30,20,20,30<br>
                >>>> >>>>>>>>><br>
                >>>> >>>>>>>>><br>
                >>>> >>>>>>>>> In
                Windows 10 guest I constantly have red bars around
                3000μs<br>
                >>>> >>>>>>>>>
                (microseconds), spiking sometimes up to 10000μs.<br>
                >>>> >>>>>>>>><br>
                >>>>
                >>>>>>>>>><br>
                >>>>
                >>>>>>>>>> I don't know
                how to fix it.<br>
                >>>>
                >>>>>>>>>> this matter for
                me because i are using USB Sound Card for my<br>
                >>>>
                >>>>>>>>>> VMs,<br>
                >>>>
                >>>>>>>>>> and i get sound
                drop-outs every 0-4 secounds<br>
                >>>>
                >>>>>>>>>><br>
                >>>> >>>>>>>>><br>
                >>>> >>>>>>>>>
                That bugs me a lot too. I also use an external USB card
                and my<br>
                >>>> >>>>>>>>>
                DAW<br>
                >>>> >>>>>>>>>
                periodically drops out :(<br>
                >>>> >>>>>>>>><br>
                >>>> >>>>>>>>> I
                haven't tried CPU pinning yet though. And perhaps I
                should<br>
                >>>> >>>>>>>>>
                try<br>
                >>>> >>>>>>>>>
                Windows 7.<br>
                >>>> >>>>>>>>><br>
                >>>> >>>>>>>>><br>
              </div>
            </div>
          </blockquote>
        </div>
        <br>
      </div>
    </blockquote>
    <br>
  </body>
</html>