L3 cache and CPU pinning

Daniel P. Berrangé <berrange at redhat.com>
Fri Apr 23 14:18:19 UTC 2021


On Thu, Apr 22, 2021 at 01:34:18PM +0200, Roman Mohr wrote:
> On Thu, Apr 22, 2021 at 1:24 PM Roman Mohr <rmohr at redhat.com> wrote:
> 
> >
> >
> > On Thu, Apr 22, 2021 at 1:19 PM Roman Mohr <rmohr at redhat.com> wrote:
> >
> >>
> >>
> >> On Wed, Apr 21, 2021 at 1:09 PM Daniel P. Berrangé <berrange at redhat.com>
> >> wrote:
> >>
> >>> On Wed, Apr 21, 2021 at 12:53:49PM +0200, Roman Mohr wrote:
> >>> > Hi,
> >>> >
> >>> > I have a question regarding enabling L3 cache emulation on domains.
> >>> > Can this also be enabled without CPU pinning, or does it need CPU
> >>> > pinning to emulate the L3 caches according to the CPUs that the
> >>> > guest is pinned to?
> >>>
> >>> I presume you're referring to
> >>>
> >>>   <cpu>
> >>>     <cache level='3' mode='emulate|passthrough|none'/>
> >>>   </cpu>
> >>>
> >>> There is no hard restriction placed on usage of these modes by QEMU.
> >>>
> >>> Conceptually though, you only want to use "passthrough" mode if you
> >>> have configured the sockets/cores/threads topology to match the host
> >>> CPUs. In turn, you only ever want to set sockets/cores/threads to
> >>> match the host if you have done CPU pinning, such that the guest
> >>> topology actually matches the host CPUs the guest is pinned to.
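> >>>
> >>> As an illustrative sketch (the host CPU numbers and guest size are
> >>> assumptions, not a recipe: it presumes host CPUs 0-3 form one socket
> >>> with 2 cores x 2 threads), the pinned case combines all three
> >>> pieces, with cache passthrough riding on host-passthrough CPU mode:
> >>>
> >>>   <vcpu placement='static'>4</vcpu>
> >>>   <cputune>
> >>>     <!-- assumes host CPUs 0-3 are one socket: 2 cores x 2 threads -->
> >>>     <vcpupin vcpu='0' cpuset='0'/>
> >>>     <vcpupin vcpu='1' cpuset='1'/>
> >>>     <vcpupin vcpu='2' cpuset='2'/>
> >>>     <vcpupin vcpu='3' cpuset='3'/>
> >>>   </cputune>
> >>>   <cpu mode='host-passthrough'>
> >>>     <!-- guest topology mirroring the pinned host CPUs -->
> >>>     <topology sockets='1' cores='2' threads='2'/>
> >>>     <cache mode='passthrough'/>
> >>>   </cpu>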
> >>>
> >>> As a rule of thumb:
> >>>
> >>>  - If letting CPUs float
> >>>
> >>>      -> Always use sockets=1, cores=num-vCPUs, threads=1
> >>>      -> cache == emulate (see the sketch below)
> >>>      -> Always use 1 guest NUMA node (ie the default)
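> >>>
> >>> A minimal sketch of the floating case, e.g. for a 4-vCPU guest (the
> >>> vCPU count is just an example):
> >>>
> >>>   <vcpu>4</vcpu>
> >>>   <cpu>
> >>>     <!-- no pinning: flat topology, one implicit NUMA node -->
> >>>     <topology sockets='1' cores='4' threads='1'/>
> >>>     <cache level='3' mode='emulate'/>
> >>>   </cpu>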
> >>>
> >>>
> >> Is `emulate` also the default in libvirt? If not, would you see any
> >> reason, e.g. thinking about migrations, not to always set it when no
> >> CPU pinning is done?
> >>
> >
> > To answer my own question: I guess something like [1] is a good reason
> > not to enable l3-cache by default, since it seems to have an impact on
> > VM density on nodes.
> >
> 
> Hm, it seems like this change only got merged for older machine types. So
> according to the libvirt docs (not setting it means hypervisor default), it
> is probably set to emulate?

Actually, that patch didn't get merged at all, AFAICT.

The support for l3-cache was introduced in QEMU 2.8.0, defaulting to
enabled. The code you see that disables it in older machine types dates
from that time, because we had to preserve ABI for machine types < 2.8.0.

So in practice today you'll be getting "emulate" mode already with any
non-ancient QEMU.
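
If you want the behaviour pinned down explicitly rather than relying on
the hypervisor default, the cache element can be spelled out either way;
mode='none' is the opt-out (relevant to the density concern above):

  <cpu>
    <cache level='3' mode='emulate'/>   <!-- make the default explicit -->
  </cpu>

or

  <cpu>
    <cache mode='none'/>                <!-- report no CPU cache at all -->
  </cpu>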

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
