[libvirt] [v4 0/9] Support cache tune in libvirt

Eli Qiao qiaoliyong at gmail.com
Mon Feb 13 07:42:04 UTC 2017



--  
Eli Qiao
Sent with Sparrow (http://www.sparrowmailapp.com/?sig)


On Friday, 10 February 2017 at 10:19 PM, Marcelo Tosatti wrote:

> On Fri, Feb 10, 2017 at 02:42:04PM +0800, Eli Qiao wrote:
> > Addressed comments from v3 -> v4:
> >  
> > Daniel & Marcelo:
> > * Added concurrency support
> >  
> > Addressed comments from v2 -> v3:
> >  
> > Daniel:
> > * Fixed coding style, passed `make check` and `make syntax-check`
> >  
> > * Renamed variables and moved them from the header file to the C file.
> >  
> > * For locking/mutex support, no progress.
> >  
> > There was some discussion on the mailing list, but I cannot find a
> > better way to add locking support without a performance impact.
> >  
> > I'll explain the process; please advise on what we should do.
> >  
> > VM create:
> > 1) Get the remaining cache value on each bank of the host. This should
> > be shared among all VMs.
> > 2) Calculate the schemata on the bank based on all created resctrl
> > domains' schemata.
> > 3) Calculate the default schemata by scanning all domains' schemata.
> > 4) Flush default schemata to /sys/fs/resctrl/schemata
> >  
> > VM destroy:
> > 1) Remove the resctrl domain of that VM
> > 2) Recalculate default schemata
> > 3) Flush default schemata to /sys/fs/resctrl/schemata
> >  
> > The key point is that all VMs share /sys/fs/resctrl/schemata, and
> > when a VM creates a resctrl domain, the schemata of that VM depends on
> > the default schemata and all other existing schematas. So a global
> > mutex is required.
> >  
> > Before calculating a schemata or updating the default schemata,
> > libvirt should acquire this global mutex.
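> >  
> > As a rough illustration of the locked section (a minimal sketch;
> > virResctrlCalculateSchemata, virResctrlRecalculateDefault and
> > virResctrlFlushDefault are hypothetical helper names):
> >  
> >     /* One daemon-wide lock serializing every schemata calculation,
> >      * initialized once with virMutexInit() at daemon startup. */
> >     static virMutex resctrlLock;
> >  
> >     static int
> >     virResctrlAllocate(virDomainObjPtr vm)
> >     {
> >         int ret = -1;
> >  
> >         virMutexLock(&resctrlLock);
> >         if (virResctrlCalculateSchemata(vm) < 0 ||
> >             virResctrlRecalculateDefault() < 0)
> >             goto cleanup;
> >         /* Flush /sys/fs/resctrl/schemata while still holding the lock */
> >         ret = virResctrlFlushDefault();
> >      cleanup:
> >         virMutexUnlock(&resctrlLock);
> >         return ret;
> >     }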
> >  
> > I will try to think more about how to support this gracefully in the
> > next patch set.
> >  
> > Marcelo:
> > * Added vcpu support for cachetune; this allows the user to define
> > which vCPUs use which cache allocation bank.
> >  
> > <cachetune id='0' host_id='0' size='3072' unit='KiB' vcpus='0,1'/>
> >  
> > vcpus is a cpumap; the vCPU PIDs will be added to the tasks file, as
> > sketched below.
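> >  
> > For illustration only (virResctrlAddTask is a hypothetical name; the
> > group path layout is assumed):
> >  
> >     #include <stdio.h>
> >     #include <limits.h>
> >     #include <sys/types.h>
> >  
> >     /* Append one thread ID to /sys/fs/resctrl/<group>/tasks; the
> >      * kernel then moves that thread into the resctrl group. */
> >     static int
> >     virResctrlAddTask(const char *grouppath, pid_t pid)
> >     {
> >         char path[PATH_MAX];
> >         FILE *fp;
> >  
> >         snprintf(path, sizeof(path), "%s/tasks", grouppath);
> >         if (!(fp = fopen(path, "w")))
> >             return -1;
> >         fprintf(fp, "%lld\n", (long long)pid);
> >         return fclose(fp) == 0 ? 0 : -1;
> >     }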
>  
> Working as expected.
>  
> > * Added CDP compatibility; the user can specify l3 cache even when the
> > host enables CDP. See patch 8.
> > On a CDP-enabled host, l3code/l3data can be specified with:
> >  
> > <cachetune id='0' host_id='0' type='l3' size='3072' unit='KiB'/>
> >  
> > This will create a schemata like:
> > L3data:0=0xff00;...
> > L3code:0=0xff00;...
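> >  
> > A minimal sketch of that expansion (bank_id and the mask value are
> > illustrative):
> >  
> >     unsigned int bank_id = 0;
> >     unsigned long long mask = 0xff00;  /* derived from size='3072' KiB */
> >     char buf[256];
> >  
> >     /* With CDP on, the same mask is written to both CDP lines */
> >     snprintf(buf, sizeof(buf), "L3data:%u=%llx\nL3code:%u=%llx\n",
> >              bank_id, mask, bank_id, mask);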
> >  
> > * Would you please help test whether these functions work?
>  
> Setting up CDP machine.
>  
> Unrelated:
>  
> Found a bug:
>  
> The code should scan for all directories in resctrlfs,
> and then find free CBM space from that:
>  
>  
> free_cbm_space = ~(resctrldir1.CBM_bits |
> resctrldir2.CBM_bits |
> ...
> resctrldirN.CBM_bits)
>  
> For all resctrlfs directories.
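>  
> A sketch of that scan (virResctrlReadCbm is a hypothetical helper that
> parses one group's schemata line for a given bank; error handling
> elided):
>  
>     #include <dirent.h>
>  
>     /* Hypothetical: parse <group>/schemata for one bank's CBM bits */
>     unsigned long long virResctrlReadCbm(const char *group,
>                                          unsigned int bank);
>  
>     /* OR together the CBM bits of every resctrl group, then take the
>      * complement within the bank's full mask to find free bits. */
>     static unsigned long long
>     virResctrlFreeCbm(unsigned int bank, unsigned long long full_mask)
>     {
>         DIR *dir = opendir("/sys/fs/resctrl");
>         struct dirent *ent;
>         unsigned long long used = 0;
>  
>         while (dir && (ent = readdir(dir))) {
>             if (ent->d_name[0] == '.')
>                 continue;
>             used |= virResctrlReadCbm(ent->d_name, bank);
>         }
>         if (dir)
>             closedir(dir);
>         return full_mask & ~used;
>     }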
>  
> The bug is as follows:
>  
> Create a directory in resctrlfs by hand:
>  
> # mkdir newres

Libvirt will not be aware of this after it starts running, so we should not allow operating on /sys/fs/resctrl directly.

We will scan all directories when the libvirt daemon starts running, and libvirt will remove existing directories if there are no tasks inside them.
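
On daemon startup that cleanup could be sketched as follows (assuming a group whose tasks file reads back empty is stale; plain files and the "info" directory are skipped implicitly because they have no tasks file):

    #include <dirent.h>
    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    /* At daemon start: remove resctrl groups that no longer have tasks. */
    static void
    virResctrlCleanStale(void)
    {
        DIR *dir = opendir("/sys/fs/resctrl");
        struct dirent *ent;
        char path[PATH_MAX], buf[32];
        FILE *fp;
        int empty;

        while (dir && (ent = readdir(dir))) {
            if (ent->d_name[0] == '.')
                continue;
            snprintf(path, sizeof(path),
                     "/sys/fs/resctrl/%s/tasks", ent->d_name);
            if (!(fp = fopen(path, "r")))
                continue;           /* not a resctrl group directory */
            empty = fgets(buf, sizeof(buf), fp) == NULL;
            fclose(fp);
            if (empty) {
                snprintf(path, sizeof(path),
                         "/sys/fs/resctrl/%s", ent->d_name);
                rmdir(path);        /* no tasks left: stale group */
            }
        }
        if (dir)
            closedir(dir);
    }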

  
> # cd newres
> # echo "L3:0=3;1=f0000" > schemata
> # virsh start guest
> # cd ../b4c270b5-e0f9-4106-a446-69032872ed7e
> # cat schemata
> L3:0=3;1=f0000
>  
> That is, it is using the same CBM space as the "newres"
> reservation.
>  
>  


If a user creates a new directory after libvirt is running, libvirt doesn't notice the newly created directory under /sys/fs/resctrl.

That will lead to a mess; this should be forbidden.
