[vfio-users] Eliminate Mini Stutter in "Witcher 3: The Wild Hunt" on top of KVM/VFIO

Alex Williamson alex.l.williamson at gmail.com
Wed Mar 16 05:33:00 UTC 2016


On Tue, Mar 15, 2016 at 10:44 PM, Okky Hendriansyah <okky.htf at gmail.com>
wrote:

> On Wed, Mar 16, 2016 at 11:37 AM, Alex Williamson <
> alex.l.williamson at gmail.com> wrote:
>
>> ..
>> The Fedora kernel already sets all of these; however, regarding 3), the
>> advice from nbhs is to use hugetlbfs and not rely on transparent
>> hugepages.  Transparent hugepages are not compatible with device
>> assignment: pages get pinned as the VM is created, so there's no
>> opportunity for transparent hugepages to take effect.  Therefore I don't
>> think your madvise change is doing anything.
>>
>
> Hi Alex,
>
> Actually, I enabled madvise since, from the description, it could give
> more efficient memory usage when using transparent hugepages. But yeah,
> you were right; I just noticed that nbhs only advised using hugetlbfs
> (which I already had in my config), so I'm not using transparent
> hugepages at all for the VM.
>
> Can you enlighten me on how those two kernel configurations can eliminate
> the mini stutters? The thing is that the mini stutters only happen in
> Witcher 3, and after applying those configs the game is very fluid.
>
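
Right, with hugetlbfs the huge pages are reserved up front and QEMU backs
guest RAM with them directly, so the madvise setting for transparent
hugepages never comes into play for the VM.  For reference, a minimal
hugetlbfs setup looks something like this (the page count, mount point,
and 16G guest size below are only assumptions for illustration):

  # reserve 2M huge pages at runtime (8192 x 2M = 16G of guest RAM)
  echo 8192 > /proc/sys/vm/nr_hugepages

  # mount hugetlbfs so QEMU can allocate guest RAM from it
  mkdir -p /dev/hugepages
  mount -t hugetlbfs hugetlbfs /dev/hugepages

  # qemu:    -mem-path /dev/hugepages -mem-prealloc
  # libvirt: <memoryBacking><hugepages/></memoryBacking> in the domain XML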

You didn't say which settings you were coming from.
For preempt, I'd guess you were probably using PREEMPT_NONE.  The config
option descriptions are actually pretty good here:

  │ CONFIG_PREEMPT_NONE:
  │ This is the traditional Linux preemption model, geared towards
  │ throughput. It will still provide good latencies most of the
  │ time, but there are no guarantees and occasional longer delays
  │ are possible.
  │ Select this option if you are building a kernel for a server or
  │ scientific/computation system, or if you want to maximize the
  │ raw processing power of the kernel, irrespective of scheduling
  │ latencies.

  │ CONFIG_PREEMPT_VOLUNTARY:
  │ This option reduces the latency of the kernel by adding more
  │ "explicit preemption points" to the kernel code. These new
  │ preemption points have been selected to reduce the maximum
  │ latency of rescheduling, providing faster application reactions,
  │ at the cost of slightly lower throughput.
  │ This allows reaction to interactive events by allowing a
  │ low priority process to voluntarily preempt itself even if it
  │ is in kernel mode executing a system call. This allows
  │ applications to run more 'smoothly' even when the system is
  │ under load.
  │ Select this if you are building a kernel for a desktop system.
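
To check which model a given kernel was built with, you can grep its build
config (Fedora installs it under /boot; /proc/config.gz is an alternative
when CONFIG_IKCONFIG_PROC is enabled):

  grep CONFIG_PREEMPT /boot/config-$(uname -r)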

The timer frequency is going to affect the granularity of how often we
switch between tasks.  Higher frequencies are better for interactivity;
lower frequencies incur less overhead for longer running tasks.  These
sort of coincide with the preempt descriptions: a lower timer frequency
allows tasks to run uninterrupted for longer, improving throughput, while
a higher timer frequency allows better interactivity since there are more
scheduling points, at the cost of some throughput.
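
Concretely, the tick period is 1/HZ, so the usual CONFIG_HZ choices work
out to:

  CONFIG_HZ=100   ->  10ms tick  (longest undisturbed slices, throughput)
  CONFIG_HZ=250   ->   4ms tick
  CONFIG_HZ=1000  ->   1ms tick  (finest granularity, interactivity)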

If you really want to isolate the VM from the host for scheduling purposes,
you can find other references in the old archlinux thread to using isolcpus
and nohz_full.  The isolcpus option removes cpus from the general
scheduler, so those cpus only run tasks manually scheduled onto them, and
nohz_full stops the timer tick on the specified cpus whenever possible.
The idea is to isolate a set of cpus and use them exclusively for running
the vcpus, providing maximum throughput to the VM, while leaving some cpus
for the host to handle interactivity and background tasks.  The trouble is
that this works well if you want to permanently divide a system between
VMs, or to have host tasks run subordinate to the VM, but it makes it
difficult for the host to use all the processor resources when the VM is
idle or unused.  cgroups (cpusets) would probably provide the same
capability more dynamically, but I don't know that anyone has really
documented it for this use case.
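
As a sketch of the boot-time approach, assuming an 8-thread host where you
want CPUs 2-7 dedicated to the vcpus (the exact CPU list here is only an
example), the kernel command line would gain something like:

  isolcpus=2-7 nohz_full=2-7 rcu_nocbs=2-7

(rcu_nocbs offloads RCU callbacks from those cpus and is commonly paired
with nohz_full), with the vcpus then pinned onto those CPUs, e.g. via
<vcpupin> entries in the libvirt <cputune> section or with taskset.  The
cpuset route would instead look something like running "cset shield --cpu
2-7 --kthread on" before starting the VM and "cset shield --reset"
afterward to give the cpus back to the host, which is what makes it the
more dynamic option.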