[PATCH RFC 00/10] qemu: Enable SCHED_CORE for domains and helper processes

Michal Prívozník mprivozn at redhat.com
Tue May 31 09:47:55 UTC 2022


On 5/26/22 14:01, Dario Faggioli wrote:
> On Mon, 2022-05-23 at 17:13 +0100, Daniel P. Berrangé wrote:
>> On Mon, May 09, 2022 at 05:02:07PM +0200, Michal Privoznik wrote:
>> In terms of defaults I'd very much like us to default to enabling
>> core scheduling, so that we have a secure deployment out of the box.
>> The only caveat is that this does have the potential to be
>> interpreted
>> as a regression for existing deployments in some cases. Perhaps we
>> should make it a meson option for distros to decide whether to ship
>> with it turned on out of the box or not ?
>>
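
(If libvirt grew such a build-time default, it could look roughly like
this in meson_options.txt; the option name "sched_core" is purely
hypothetical here, since the RFC as posted does not add one:

    option('sched_core', type: 'boolean', value: true,
           description: 'Enable core scheduling for QEMU domains by default')

A distro could then flip the default at configure time with
-Dsched_core=false.)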
> I think, as Michal said, with the qemu.conf knob from patch 8, we will
> already have that. I.e., distros will ship a qemu.conf with sched_core
> equal to 1 or 0, depending on what they want as a default behavior.
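
(As a rough sketch of what such a distro-shipped default would look like
in qemu.conf; the exact name and type of the knob are whatever patch 8
ends up with:

    # Place each domain, including its helper processes, into its own
    # core scheduling group, so no other task shares its SMT siblings.
    sched_core = 1
)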
> 
>> I don't think we need core scheduling to be a VM XML config option,
>> because security is really a host level matter IMHO, such that it
>> doesn't make sense to have both secure & insecure VMs co-located.
>>
> Mmm... I can't say that I have any concrete example, but I guess I can
> picture a situation where someone has "sensitive" VMs, which they want
> protected from the possibility that other VMs steal their secrets, and
> "less sensitive" ones, for which it's not a concern if they share cores
> and (potentially) steal secrets from each other (as long as none of
> them can steal from any "sensitive" one, which cannot happen if we set
> core scheduling for the latter).
> 
> Another scenario would be if core scheduling is (ab)used for limiting
> interference, like some sort of flexible and dynamic form of
> vcpu-pinning. That is, if I set core scheduling for VM1, I'm sure that
> VM1's vcpus will never share cores with any other VM's. That is good
> for performance and determinism, because it means that it can't happen
> that vcpu1 of VM3 runs on the same core as vcpu0 of VM1 and, when
> VM3-vcpu1 is busy, VM1-vcpu0 slows down as well. Imagine that VM1 and
> VM3 are owned by different customers: core scheduling would allow me
> to make sure that whatever customer A is doing in VM3 can't slow down
> customer B, who owns VM1, without having to resort to vcpu-pinning,
> which is inflexible. And again, maybe we do want this "dynamic
> interference shielding" property for some VMs, but not for all...
> E.g., we can offer it as a higher SLA, and ask more money for a VM
> that has it.
> 
> Thoughts?
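
For reference, the mechanism underneath all of this is the kernel's
PR_SCHED_CORE prctl() (Linux >= 5.14): tasks sharing a core scheduling
"cookie" may run on siblings of the same SMT core, tasks with different
cookies may not. A minimal sketch, independent of how this series wires
it up (helper_pid below is hypothetical):

    #include <stdio.h>
    #include <sys/prctl.h>

    /* Fallbacks for older userspace headers; values from linux/prctl.h */
    #ifndef PR_SCHED_CORE
    # define PR_SCHED_CORE          62
    # define PR_SCHED_CORE_CREATE   1   /* create a new unique cookie */
    # define PR_SCHED_CORE_SHARE_TO 2   /* push our cookie to a pid   */
    #endif

    #ifndef PIDTYPE_TGID
    # define PIDTYPE_TGID 1   /* scope: the whole thread group */
    #endif

    int main(void)
    {
        /* Give the calling process a fresh cookie: from now on its
         * threads only ever share an SMT core with each other, never
         * with any task outside the group. */
        if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0,
                  PIDTYPE_TGID, 0) < 0) {
            perror("PR_SCHED_CORE_CREATE");
            return 1;
        }

        /* Children forked from here inherit the cookie; an already
         * running helper would be pulled in explicitly:
         * prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_TO, helper_pid,
         *       PIDTYPE_TGID, 0); */
        return 0;
    }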

I'd expect the host scheduler to work around this problem, e.g. by
running vCPUs of different VMs on different cores. Of course, this
assumes that they are allowed to run on different cores (i.e. they are
not pinned onto the same physical CPU). And if they are, then that's
obviously a misconfiguration on the admin's side.
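
For illustration, such a misconfiguration could look like this in the
two domains' XML, both guests forced onto the same physical CPU (the
CPU numbers are made up):

    <!-- VM1 -->
    <cputune>
      <vcpupin vcpu='0' cpuset='2'/>
    </cputune>

    <!-- VM3 -->
    <cputune>
      <vcpupin vcpu='0' cpuset='2'/>
    </cputune>

With both vCPUs confined to pCPU 2, the scheduler has no freedom to
separate them onto different cores, core scheduling or not.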

Michal


