[libvirt-users] virsh -c lxc:/// setvcpus and <vcpu> configuration fails

Oliver Dzombic info at layer7.net
Mon Sep 16 12:06:52 UTC 2019


Hi Martin,

thank you very much for your response!

Answers inline :)


>> Although if you ran some perf benchmark it should just cap at 2 cpus.

A cpuset.cpus from /sys/fs/cgroup.... will show, with libvirtd: 0-23

So with 0-23 all 24 cores are assigned, AND usable.

I installed stress and ran it with --cpu 8, 16, or whatever. Exactly as
the cgroup says, the container gets all 24 cores without any
limitation.
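
For reference, the check looks roughly like this (the scope name in the
path below is a placeholder; the exact cgroup path depends on the host
setup):

  # on the host: cpuset assigned by libvirtd to the container
  $ cat /sys/fs/cgroup/cpuset/machine.slice/<lxctest1-scope>/cpuset.cpus
  0-23

  # inside the container: load the cores, nothing caps it at 2
  $ stress --cpu 16 --timeout 60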

/proc/cpuinfo
lscpu

No matter which one you ask, all CPUs are shown. Nothing is limited.
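
Both report the host's CPU count, for example:

  $ grep -c ^processor /proc/cpuinfo
  24
  $ lscpu | grep '^CPU(s):'
  CPU(s):    24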

You will also see the CPU load of the system, so the CPU accounting of
the host is given to the container as well (the container will see that
there is load on the CPUs).


Compared to this,

a cpuset.cpus in lxc version 3 will show: 12,17,21,28 (Proxmox).

So 4 cores will be displayed, and as long as you don't put load on
them, you will see ~100% idle on all 4 cores.
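
In plain lxc that limit comes straight out of the container config,
with something like (key spelling for cgroup v1, as far as I remember):

  lxc.cgroup.cpuset.cpus = 12,17,21,28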


The same goes for the lxd implementation (tested 3.x).
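
There the limit is set with the limits.cpu key, e.g. (the container
name is just an example):

  $ lxc config set mycontainer limits.cpu 4

and, as far as I understand, it is lxcfs that then also makes
/proc/cpuinfo inside the container show only those CPUs.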


And yes, sorry, I only saw later that virDomainSetVcpus simply does not
support LXC. My fault, I hadn't read the documentation fully at that
point.

------

So the general question is what the expected result is when using
libvirt (and I would really love to use libvirt, as we already use it
with KVM, and would also like to use it with Docker).

The goal would be to have something like what lxd / lxc provides:

a container that has N virtual CPUs, where only those N virtual CPUs
are seen by the container through /proc/cpuinfo, with its own CPU
accounting.

So the question is:

Can libvirt set cpuset and cpuacct inside cgroups (correctly / at all)?
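
What I would have expected to work, if the lxc driver honours it, is
the cpuset attribute on <vcpu> (or a <cputune>/<vcpupin> block), e.g.:

  <vcpu placement='static' cpuset='12,17,21,28'>4</vcpu>

but whether the lxc driver actually writes that into cpuset.cpus is
exactly what I am asking.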


The point here is the following:

If software checks the container and sees on the one hand that there
are 24 CPUs available, but on the other hand only some subset of them
is actually usable, things will go down a very dark road for us. And I
guess not only for us.

So it's mandatory that the cgroups are set correctly, at least in terms
of cpuset.

Can libvirt do this? Or, as it seems right now, can libvirt only manage
KVM's cgroup settings correctly?
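
(A quick way to see which tunables the driver does expose seems to be
virsh schedinfo, which should list the cpu cgroup parameters:

  $ virsh -c lxc:/// schedinfo lxctest1

I have not verified every parameter it prints for the lxc driver.)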

We are simply searching for a better implementation for the workload;
containers would be much better because of the lower overhead.

Again, thank you for your time !


-- 
Mit freundlichen Gruessen / Best regards

Oliver

On 16.09.19 at 10:58, Martin Kletzander wrote:
> On Sun, Sep 15, 2019 at 12:21:08PM +0200, info at layer7.net wrote:
>> Hi folks!
>>
>> i created a server with this XML file:
>>
>> <domain type='lxc'>
>>  <name>lxctest1</name>
>>  <uuid>227bd347-dd1d-4bfd-81e1-01052e91ffe2</uuid>
>>  <metadata>
>>    <libosinfo:libosinfo
>> xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
>>      <libosinfo:os id="http://centos.org/centos/6.9"/>
>>    </libosinfo:libosinfo>
>>  </metadata>
>>  <memory unit='KiB'>1024000</memory>
>>  <currentMemory unit='KiB'>1024000</currentMemory>
>> <vcpu>2</vcpu>
>>  <numatune>
>>    <memory mode='strict' placement='auto'/>
>>  </numatune>
>>  <resource>
>>    <partition>/machine</partition>
>>  </resource>
>>  <os>
>>    <type arch='x86_64'>exe</type>
>>    <init>/sbin/init</init>
>>  </os>
>>  <idmap>
>>    <uid start='0' target='200000' count='65535'/>
>>    <gid start='0' target='200000' count='65535'/>
>>  </idmap>
>>  <features>
>>    <privnet/>
>>  </features>
>>  <clock offset='utc'/>
>>  <on_poweroff>destroy</on_poweroff>
>>  <on_reboot>restart</on_reboot>
>>  <on_crash>destroy</on_crash>
>>  <devices>
>>    <emulator>/usr/libexec/libvirt_lxc</emulator>
>>    <filesystem type='mount' accessmode='mapped'>
>>      <source dir='/mnt'/>
>>      <target dir='/'/>
>>    </filesystem>
>>    <interface type='network'>
>>      <mac address='00:16:3e:3e:3e:bb'/>
>>      <source network='Public Network'/>
>>    </interface>
>>    <console type='pty'>
>>      <target type='lxc' port='0'/>
>>    </console>
>>  </devices>
>> </domain>
>>
>>
>> I would expect it to have 2 cpu cores and 1 GB RAM.
>>
>>
>> The RAM config works.
>> The CPU config does not:
>>
> 
> You probably checked /proc/meminfo.  That is provided by libvirt
> using a fuse filesystem, but at least it is guaranteed thanks to
> cgroups.  We do not (and I don't think we even can, at least reliably)
> do that with cpuinfo.
> 
> [...]
> 
>> It gives me all CPUs from the host.
>>
> 
> Although if you ran some perf benchmark it should just cap at 2 cpus.
> 
>> I also tried it with
>>
>> <cpu>
>>  <topology sockets='1' cores='2' threads='1'/>
>> </cpu>
>>
> 
> We should not allow this, IMO.  The reason is that we cannot
> guarantee or even emulate this (or even the number of CPUs for that
> matter).  That's not how containers work.  We can provide
> /proc/cpuinfo through a fuse filesystem, but if the code actually
> asks the cpu directly there is no layer in which to emulate the
> returned information.
> 
>> That didn't help either.
>>
>> I tried to modify the vcpus through virsh:
>>
>>
>>
>> #virsh -c lxc:/// setvcpus lxctest1 2
>>
>> error: this function is not supported by the connection driver:
>> virDomainSetVcpus
>>
> 
> This should not work for LXC, but it does not make sense, because if
> you look at the XML we allow `<vcpu>2</vcpu>`.
> 
>> Which didn't work either.
>>
>>
>> This happens on:
>>
> 
> Unfortunately anywhere, for the reasons stated above.  Ideally it
> should not be possible to specify <vcpu/>, but rather just
> <cputune/>, but I don't think we can change those semantics now that
> we have supported the former for quite some time.



