[kubevirt-dev] RFC: New APIs for delegation of privileged operations

Fox, Kevin M Kevin.Fox at pnnl.gov
Tue Nov 29 18:23:07 UTC 2022


Would a regular libvirt installation benefit from having libvirtd run untrusted too (I think so), especially if it's exposed on the network? Instead of making this pluggable, maybe the architecture should be updated so that libvirtd never performs trusted operations itself, and then the solution could be shared between libvirt without KubeVirt and libvirt+KubeVirt?

Thanks,
Kevin

________________________________________
From: kubevirt-dev at googlegroups.com <kubevirt-dev at googlegroups.com> on behalf of Andrea Bolognani <abologna at redhat.com>
Sent: Tuesday, November 29, 2022 9:05 AM
To: libvirt-list at redhat.com
Cc: kubevirt-dev at googlegroups.com
Subject: [kubevirt-dev] RFC: New APIs for delegation of privileged operations

Hi,

this is a proposal for introducing a new family of APIs in libvirt,
with the goal of improving integration with management applications.

KubeVirt is intended to be the primary consumer of these APIs.


Background
----------

KubeVirt makes it possible to run VMs on a Kubernetes cluster, side
by side with containers.

It does so by running QEMU and libvirtd themselves inside a
container. The architecture is explained in more detail at

  https://kubevirt.io/user-guide/architecture/

but for the purpose of this discussion we only need to keep in mind
two components:

  * virt-launcher

    - runs in the same container as QEMU and libvirtd
    - one instance per VM

  * virt-handler

    - runs in a separate container
    - one instance per node

Conceptually, these two components roughly map to QEMU processes and
libvirtd respectively.

From a security perspective, there is a strong push in Kubernetes to
run workloads under unprivileged user accounts and without additional
capabilities. Again, this is similar to how libvirtd itself runs as
root but the QEMU processes it starts are under the unprivileged
"qemu" account.

KubeVirt has been working towards the goal of running VMs as
completely unprivileged workloads and made excellent progress so far.

Some of the operations needed for running a VM, however, inherently
require elevated privilege. In KubeVirt, the conundrum is solved by
having virt-handler (a privileged component) take care of those
operations, making it possible for virt-launcher (as well as QEMU and
libvirtd) to run in an unprivileged context.


Examples
--------

Here are a few examples of how KubeVirt has been able to reduce the
privilege required by virt-launcher by selectively handing over
responsibilities to virt-handler:

  * Remove SYS_RESOURCE capability from launcher pod
    https://github.com/kubevirt/kubevirt/pull/2584

  * Drop SYS_RESOURCE capability
    https://github.com/kubevirt/kubevirt/pull/5558

  * Housekeeping cgroup
    https://github.com/kubevirt/kubevirt/pull/8233

  * Real time VMs fail to change vCPU scheduler and priority in
    non-root deployments
    https://github.com/kubevirt/kubevirt/pull/8750

  * virt-launcher: Drop SYS_PTRACE capability
    https://github.com/kubevirt/kubevirt/pull/8842

The pattern we can see is that, initially, libvirt just assumes that
it can perform a certain privileged operation. This fails in the
context of KubeVirt, where libvirtd runs with significantly reduced
privileges. As a consequence, libvirt is patched to be more resilient
to such lack of privilege: for example, instead of attempting to
create a file and erroring out due to lack of permissions, it will
instead first check whether the file already exists and, if it does,
assume that it has been prepared ahead of time by an external entity.
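
As a concrete illustration, the resulting pattern tends to look roughly
like the sketch below (the function name and the exact checks are
illustrative, not actual libvirt code):

    #include <errno.h>
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Illustrative only: try to create a resource, but tolerate a lack
     * of privilege if an external entity has already prepared it. */
    static int
    preparePath(const char *path)
    {
        int fd = open(path, O_CREAT | O_WRONLY, 0600);

        if (fd < 0) {
            if (errno == EACCES || errno == EPERM) {
                struct stat sb;

                /* Creation was denied: check whether the file has been
                 * set up ahead of time (e.g. by virt-handler). */
                if (stat(path, &sb) == 0)
                    return 0;
            }
            return -1;
        }

        close(fd);
        return 0;
    }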


Limitations
-----------

This approach works fine, but only for the privileged operations that
would be performed by libvirt before the VM starts running.

Looking at the "housekeeping cgroup" PR in particular, we notice that
the VM is initially created in paused state: this is necessary in
order to create a point in time in which all the VM threads already
exist but, crucially, none of the vCPUs have started running yet. This
is the only opportunity to move threads across cgroups without
invalidating the expectations of a real time workload.
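
For reference, moving a single helper thread into such a housekeeping
cgroup essentially boils down to something like the sketch below; the
path layout and the assumption of a threaded cgroup v2 subtree are
illustrative, not KubeVirt's actual implementation:

    #include <limits.h>
    #include <stdio.h>
    #include <sys/types.h>

    /* Illustrative only: move one QEMU thread into a dedicated
     * "housekeeping" cgroup by writing its TID to cgroup.threads.
     * This is only safe while the vCPUs have not started running. */
    static int
    moveThreadToHousekeeping(const char *cgroupPath, pid_t tid)
    {
        char path[PATH_MAX];
        FILE *fp;
        int ret = -1;

        snprintf(path, sizeof(path), "%s/cgroup.threads", cgroupPath);

        if (!(fp = fopen(path, "w")))
            return -1;

        if (fprintf(fp, "%d\n", (int)tid) >= 0)
            ret = 0;

        if (fclose(fp) != 0)
            ret = -1;

        return ret;
    }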

When it comes to live migration, however, there is no way to create
similar conditions, since the VM is running on the destination host
right out of the gate. As a consequence, live migration has to be
blocked when the housekeeping cgroup is in use, which is an
unfortunate limitation.

Moreover, there's an overall sense of fragility surrounding these
interactions: both KubeVirt and, to some extent, libvirt need to be
acutely aware of what the other component is going to do, but there
is never an explicit handover, and the whole thing only works if
everything happens to be done in exactly the right order and with the
right timing.


Proposal
--------

In order to address the issues outlined above, I propose that we
introduce a new set of APIs in libvirt.

These APIs would expose some of the inner workings of libvirt, and
as such would come with *massively reduced* stability guarantees
compared to the rest of our public API.

The idea is that applications such as KubeVirt, which track libvirt
fairly closely and stay pinned to specific versions, would be able to
adapt to changes in these APIs relatively painlessly. More
traditional management applications such as virt-manager would simply
not opt into using the new APIs and maintain the status quo.

Using memlock as an example, the new API could look like

    typedef int (*virInternalSetMaxMemLockHandler)(pid_t pid,
                                                   unsigned long long bytes);

    int virInternalSetProcessSetMaxMemLockHandler(virConnectPtr conn,
                                                  virInternalSetMaxMemLockHandler handler);

The application-provided handler would be responsible for performing
the privileged operation (in this case raising the memlock limit for
a process). For KubeVirt, virt-launcher would have to pass the baton
to virt-handler.
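
To make the flow concrete, registration on the application side could
look something like the sketch below; kubevirtDelegateMemLock() is a
hypothetical stand-in for whatever channel virt-launcher would use to
ask virt-handler to act on its behalf:

    #define VIR_INTERNAL_UNSTABLE_API_OPT_IN
    #include <libvirt/libvirt-internal.h>   /* hypothetical, see Caveats below */
    #include <sys/types.h>

    /* Hypothetical helper: forwards the request to virt-handler over
     * some IPC channel and reports whether the limit was raised. */
    extern int kubevirtDelegateMemLock(pid_t pid, unsigned long long bytes);

    static int
    launcherMaxMemLockHandler(pid_t pid, unsigned long long bytes)
    {
        return kubevirtDelegateMemLock(pid, bytes);
    }

    static int
    registerHandlers(virConnectPtr conn)
    {
        return virInternalSetProcessSetMaxMemLockHandler(conn,
                                                         launcherMaxMemLockHandler);
    }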

If such a handler is installed, libvirt would invoke it (and likely
go through some sanity checks afterwards); if not, it would attempt
to perform the privileged operation itself, as it does today.
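
On the libvirt side, the dispatch could conceptually look like the
minimal sketch below; the names are made up, and
virProcessSetMaxMemLock() simply stands for libvirt's existing code
path:

    /* Minimal sketch; in reality the handler would likely be stored on
     * the connection object rather than in a global. */
    static virInternalSetMaxMemLockHandler maxMemLockHandler;

    static int
    virInternalAdjustMaxMemLock(pid_t pid, unsigned long long bytes)
    {
        if (maxMemLockHandler) {
            /* Delegate to the management application, then optionally
             * sanity-check that the new limit actually took effect. */
            return maxMemLockHandler(pid, bytes);
        }

        /* No handler installed: perform the privileged operation
         * directly, exactly as libvirt does today. */
        return virProcessSetMaxMemLock(pid, bytes);
    }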

This would make the interaction between libvirt and the management
application explicit rather than implicit. Not having to stick to our
usual API stability guarantees would make it possible to be more
liberal in exposing the internals of libvirt as interaction points.


Scope
-----

I think we should initially limit the new APIs to the scenarios that
have already been identified, then gradually expand the scope as
needed. In other words, we shouldn't comb through the codebase
looking for potential adopters.

Since the intended consumers of these APIs are those that can
adopt a new libvirt release fairly quickly, this shouldn't be a
problem.

Once the pattern has been established, we could add support for it at
the same time as any new feature that would benefit from it is
introduced.


Caveats
-------

libvirt is all about stable API, so introducing an API that is
unstable *by design* is completely uncharted territory.

To ensure that the new APIs are 100% opt-in, we could define them in
a separate <libvirt/libvirt-internal.h> header. Furthermore, we could
have a separate libvirt-internal.so shared library for the symbols
and a corresponding libvirt-internal.pc pkg-config file. We could
even go as far as requiring a preprocessor symbol such as

  VIR_INTERNAL_UNSTABLE_API_OPT_IN

to be defined before the entry points are visible to the compiler.
Whatever the mechanism, we would need to make sure that it's usable
from language bindings as well.
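
For instance, the header itself could refuse to declare anything unless
the symbol is defined; this is a sketch, not an agreed-upon mechanism:

    /* Hypothetical top of <libvirt/libvirt-internal.h> */
    #ifndef VIR_INTERNAL_UNSTABLE_API_OPT_IN
    # error "define VIR_INTERNAL_UNSTABLE_API_OPT_IN to opt into libvirt's unstable APIs"
    #endif

    /* ...and in the consuming application: */
    #define VIR_INTERNAL_UNSTABLE_API_OPT_IN
    #include <libvirt/libvirt-internal.h>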

APIs like these are liable not only to come and go, but also to change
semantics between versions. We should make sure that such changes are
clearly exposed to the user, for example by requiring them to pass a
version number to the function and erroring out immediately if the
value doesn't match our expectations. KubeVirt has a massive suite of
functional tests, so this kind of change would immediately be spotted
when a new version of libvirt is imported, with no risk of an
incompatibility lingering in the codebase until it affects users.
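
One possible shape for such a version check, purely as a sketch that
adds a version argument to the declaration from the Proposal section
(the constant and the error handling are not part of any agreed-upon
design):

    /* Bumped every time the semantics of the unstable APIs change. */
    #define VIR_INTERNAL_API_VERSION 1

    int
    virInternalSetProcessSetMaxMemLockHandler(virConnectPtr conn,
                                              unsigned int version,
                                              virInternalSetMaxMemLockHandler handler)
    {
        if (version != VIR_INTERNAL_API_VERSION) {
            /* The caller was built against different semantics: fail
             * immediately instead of misbehaving later at runtime. */
            virReportError(VIR_ERR_OPERATION_UNSUPPORTED,
                           _("internal API version %u not supported"), version);
            return -1;
        }

        /* ... install the handler as described in the Proposal ... */
        return 0;
    }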


Disclaimer
----------

This proposal is intentionally vague on several of the details.
Before attempting to nail those down, I want to gather feedback on
the high-level idea, both from the libvirt and KubeVirt side.


Credits
-------

Thanks to Michal and Martin for helping shape and polish the idea
from its initial rough state.

--
Andrea Bolognani / Red Hat / Virtualization



