L3 cache and CPU pinning

Daniel P. Berrangé berrange at redhat.com
Wed Apr 21 12:13:39 UTC 2021


On Wed, Apr 21, 2021 at 12:09:42PM +0100, Daniel P. Berrangé wrote:
> On Wed, Apr 21, 2021 at 12:53:49PM +0200, Roman Mohr wrote:
> > Hi,
> > 
> > I have a question regarding enabling l3 cache emulation on Domains. Can
> > this also be enabled without CPU pinning, or does it need CPU pinning to
> > emulate the L3 caches according to the CPUs the guest is pinned to?
> 
> I presume you're referring to
> 
>   <cpu>
>     <cache level='3' mode='emulate|passthrough|none'/>
>   </cpu>
> 
> There is no hard restriction placed on usage of these modes by QEMU.
> 
> Conceptually though, you only want to use "passthrough" mode if you
> have configured the sockets/cores/threads topology to match the host
> CPUs. In turn you only ever want to set sockets/cores/threads to
> match the host if you have done CPU pinning such that the topology
> actually matches the host CPUs that have been pinned to.
> 
> As a rule of thumb
> 
>  - If letting CPUs float
>  
>      -> Always use sockets=1, cores=num-vCPUs, threads=1
>      -> cache==emulate
>      -> Always use 1 guest NUMA node (ie the default)
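For example, a 4-vCPU floating guest following those rules might look
like this (illustrative fragment, values chosen just for the example):

  <vcpu>4</vcpu>
  <cpu mode='host-model'>
    <topology sockets='1' cores='4' threads='1'/>
    <cache level='3' mode='emulate'/>
  </cpu>
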
> 
> 
>  - If strictly pinning CPUs 1:1
> 
>      -> Use sockets=N, cores=M, threads=0 to match the topology
>         of the CPUs that have been pinned to

Oops, I meant threads=P there, not 0 - i.e. match the host thread count.

With recent-ish libvirt+QEMU there is also a "dies=NNN" parameter for
the topology, which may be relevant for some host CPUs (very recent
Intel ones).
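
For instance, on a host CPU with two dies per socket, the pinned
topology could be expressed as (illustrative values only):

  <topology sockets='1' dies='2' cores='4' threads='2'/>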

>      -> cache==passthrough
>      -> Configure virtual NUMA nodes if the CPU pinning or guest
>         RAM spans multiple host NUMA nodes.
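
As an illustration, a strictly pinned 4-vCPU guest spanning two host
sockets might look something like this (the host CPU numbers, memory
sizes and topology are made up - adjust to the real host layout):

  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='8'/>
    <vcpupin vcpu='3' cpuset='9'/>
  </cputune>
  <cpu mode='host-passthrough'>
    <topology sockets='2' cores='2' threads='1'/>
    <cache level='3' mode='passthrough'/>
    <numa>
      <cell id='0' cpus='0-1' memory='4' unit='GiB'/>
      <cell id='1' cpus='2-3' memory='4' unit='GiB'/>
    </numa>
  </cpu>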

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



