[PATCH RFC 00/10] qemu: Enable SCHED_CORE for domains and helper processes

Dario Faggioli dfaggioli at suse.com
Thu May 26 12:01:30 UTC 2022


On Mon, 2022-05-23 at 17:13 +0100, Daniel P. Berrangé wrote:
> On Mon, May 09, 2022 at 05:02:07PM +0200, Michal Privoznik wrote:
> In terms of defaults I'd very much like us to default to enabling
> core scheduling, so that we have a secure deployment out of the box.
> The only caveat is that this does have the potential to be
> interpreted
> as a regression for existing deployments in some cases. Perhaps we
> should make it a meson option for distros to decide whether to ship
> with it turned on out of the box or not ?
>
I think, as Michal said, with the qemu.conf knob from patch 8, we will
already have that. I.e., distros will ship a qemu.conf with sched_core
equal to 1 or 0, depending on what they want as a default behavior.
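
Just to illustrate what I mean (the exact name and accepted values are
of course whatever patch 8 defines, this is only a sketch), the shipped
default would then boil down to a single line in qemu.conf:

  # Give each QEMU process its own core scheduling "cookie", so that
  # its threads never share an SMT core with threads of other
  # processes. Distros wanting the secure-by-default behavior ship
  # this as 1, those worried about the performance regression ship 0.
  sched_core = 1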

> I don't think we need core scheduling to be a VM XML config option,
> because security is really a host level matter IMHO, such that it
> doesn't make sense to have both secure & insecure VMs co-located.
> 
Mmm... I can't say that I have a concrete example, but I can picture a
situation where someone has "sensitive" VMs, which they want protected
from the possibility of other VMs stealing their secrets, and "less
sensitive" ones, for which it's not a concern if they share cores and
(potentially) leak secrets to each other, as long as none of them can
steal from any "sensitive" VM. And that cannot happen if core
scheduling is enabled for the sensitive ones.
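
Purely as a strawman (this element does not exist anywhere, it's just
to show what a per-VM knob could look like), expressing that split
might be as simple as something like:

  <!-- hypothetical syntax, not part of the current domain schema -->
  <domain type='kvm'>
    <name>sensitive-vm</name>
    ...
    <features>
      <!-- this VM's threads never share SMT cores with other tasks -->
      <sched-core state='on'/>
    </features>
  </domain>

while the "less sensitive" VMs would simply leave it out (or set it to
'off') and fall back to whatever the host-wide default is.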

Another scenario would be if core-scheduling is (ab)used for limiting
interference, like some sort of flexible and dynamic form of
vcpu-pinning. That is, if I set core-scheduling for VM1, I'm sure that
VM1's vcpus will never share cores with any other VM. Which is good
for performance and determinism, because it means it can't happen that
vcpu1 of VM3 runs on the same core as vcpu0 of VM1 and, whenever
VM3-vcpu1 is busy, VM1-vcpu0 slows down as well. Imagine that VM1 and
VM3 are owned by different customers: core-scheduling would allow me
to make sure that whatever customer A is doing in VM3 can't slow down
customer B, who owns VM1, without having to resort to vcpu-pinning,
which is inflexible. And again, maybe we do want this "dynamic
interference shielding" property for some VMs, but not for all... E.g.,
we could offer it as a higher SLA, and charge more for a VM that has
it.
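
For reference, what all of this boils down to on the kernel side is the
core scheduling "cookie" handed out by the prctl() interface; here is a
minimal sketch (not libvirt's actual code, and assuming Linux >= 5.14
built with CONFIG_SCHED_CORE) of tagging a process so that its threads
only ever share an SMT core with each other:

  /* sched-core-demo.c: give the calling process its own cookie. */
  #include <stdio.h>
  #include <sys/prctl.h>

  #ifndef PR_SCHED_CORE
  # define PR_SCHED_CORE           62
  # define PR_SCHED_CORE_CREATE     1  /* allocate a new, unique cookie   */
  # define PR_SCHED_CORE_SHARE_TO   2  /* copy our cookie to another task */
  #endif

  /* Scope of the operation; values match the kernel's enum pid_type. */
  #define SCOPE_THREAD        0  /* PIDTYPE_PID: a single thread        */
  #define SCOPE_THREAD_GROUP  1  /* PIDTYPE_TGID: the whole process     */

  int main(void)
  {
      /*
       * pid == 0 means "the calling task"; thread-group scope tags
       * every thread of this process with the same fresh cookie, so
       * the scheduler will only co-schedule them with each other on
       * the SMT siblings of a core.
       */
      if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE,
                0, SCOPE_THREAD_GROUP, 0) < 0) {
          perror("PR_SCHED_CORE_CREATE");
          return 1;
      }

      puts("core scheduling cookie created for this process");
      return 0;
  }

Anything fork()'d or exec()'d from such a process (e.g. the QEMU
process for VM1 and all of its vcpu threads) inherits the cookie,
which is exactly the property the "interference shielding" use case
above relies on.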

Thoughts?

In any case, even if we decide that we do want per-VM core-scheduling,
e.g., for the above mentioned reasons, I guess it can come later, as a
further improvement (and I'd be happy to help making it happen).

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)