[libvirt] Question / Bug: "IO mon_iothread" not affected by "<iothreadsched scheduler=.."
Daniel P. Berrangé
berrange at redhat.com
Fri Jan 10 17:41:39 UTC 2020
On Mon, Dec 30, 2019 at 12:26:40AM +0200, gima+libvir-list at iki.fi wrote:
> # Question / Bug
> For "<iothreads>1</iothreads>", QEMU creates two threads by name of
> "IO mon_iothread" and
> "IO iothread1"
That isn't correct.
The "IO mon_iothread" always exists with new QEMU, regardless of
whether any <iothreads> element is present in the guest config.
This is a secret internal QEMU thread used for the monitor and
has no relation to I/O threads used for guest devices.
> Both are affected by "<iothreadpin iothread='1' cpuset='5'/>" (pinned to
> specified CPU), but only "IO iothread1" is affected by "<iothreadsched
> iothread='1' cpuset='5'/>".
I don't see that behaviour. Only the explicitly requested device I/O
thread is affected by <iothreadpin> and <iothreadsched>.
> I believe this is to be a bug, whereas both threads should be affected, and
> be set to be ruled by the specified iothread scheduler. Am I correct and is
> this a bug, or am I missing something?
I don't see any bug here.
As a test I have a guest with 2 CPUs and 1 I/O thread:
<vcpu placement='static' cpuset='0-1'>2</vcpu>
<vcpupin vcpu='0' cpuset='0'/>
<vcpupin vcpu='1' cpuset='1'/>
<iothreadpin iothread='1' cpuset='3'/>
<iothreadsched iothreads='1' scheduler='batch'/>
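For context, the pinning and scheduler elements above all live under
<cputune> in the domain XML, while <vcpu> and <iothreads> are top-level
elements; a sketch of how this test config fits together:

```xml
<vcpu placement='static' cpuset='0-1'>2</vcpu>
<iothreads>1</iothreads>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <iothreadpin iothread='1' cpuset='3'/>
  <iothreadsched iothreads='1' scheduler='batch'/>
</cputune>
```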
The overall QEMU emulator process is pinned to CPUs 0-1 by
default. The 2 VCPUs are then further pinned to CPUs 0 and
1 respectively. The only I/O thread is pinned to CPU 3
and given the batch scheduler.
I can now validate what is running:
# cd /proc/$QEMU-PID/task
# grep -E 'Name|Cpus_allowed_list' */status
4031532/status:Name: IO iothread1
4031534/status:Name: IO mon_iothread
4031535/status:Name: CPU 0/TCG
4031536/status:Name: CPU 1/TCG
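The same inspection can be scripted. A minimal Python sketch that pulls
the Name and Cpus_allowed_list fields from /proc/<pid>/task/*/status,
exactly as the grep above does (Linux-only; here it inspects its own
process, but any readable QEMU pid works):

```python
# List each thread of a process with its name and CPU affinity,
# read from /proc/<pid>/task/*/status (one directory per thread).
import os
from pathlib import Path

def thread_info(pid: int) -> list[tuple[str, str]]:
    rows = []
    for status in sorted(Path(f"/proc/{pid}/task").glob("*/status")):
        # Each line is "Key:\tvalue"; build a dict of the fields.
        fields = dict(
            line.split(":\t", 1)
            for line in status.read_text().splitlines()
            if ":\t" in line
        )
        rows.append((fields["Name"].strip(),
                     fields["Cpus_allowed_list"].strip()))
    return rows

for name, cpus in thread_info(os.getpid()):
    print(f"{name}\t{cpus}")
```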
This shows that only the requested I/O thread had its
CPU affinity changed - "IO mon_iothread" is untouched.
Similarly we can show that the batch scheduler only
applied to the requested I/O thread:
# for i in * ; do chrt -p $i ; done
pid 4031509's current scheduling policy: SCHED_OTHER
pid 4031509's current scheduling priority: 0
pid 4031528's current scheduling policy: SCHED_OTHER
pid 4031528's current scheduling priority: 0
pid 4031532's current scheduling policy: SCHED_BATCH
pid 4031532's current scheduling priority: 0
pid 4031534's current scheduling policy: SCHED_OTHER
pid 4031534's current scheduling priority: 0
pid 4031535's current scheduling policy: SCHED_OTHER
pid 4031535's current scheduling priority: 0
pid 4031536's current scheduling policy: SCHED_OTHER
pid 4031536's current scheduling priority: 0
pid 4031540's current scheduling policy: SCHED_OTHER
pid 4031540's current scheduling priority: 0
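Under the hood this is the sched_setscheduler() syscall applied to the
one requested thread. A minimal Python sketch of the same operation on
the calling thread, then reading the policy back the way `chrt -p` does
(Linux-only; dropping from SCHED_OTHER to SCHED_BATCH needs no
privileges):

```python
import os

# Map the policy constants back to the names chrt prints.
POLICIES = {os.SCHED_OTHER: "SCHED_OTHER", os.SCHED_BATCH: "SCHED_BATCH"}

def current_policy(pid: int = 0) -> str:
    # pid 0 means "the calling thread", as with the raw syscall.
    return POLICIES.get(os.sched_getscheduler(pid), "unknown")

# Non-real-time policy change: allowed without CAP_SYS_NICE.
os.sched_setscheduler(0, os.SCHED_BATCH, os.sched_param(0))
print(current_policy())  # prints SCHED_BATCH on success
```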
> # What does this matter / How does this manifest a problem?
> This manifests in case there is 1 iothread, and both iothread and emulator
> are pinned to the same cpu and set to use fifo or rr as their scheduler. In
> this configuration, QEMU does not start correctly and "stalls" until I
> change the scheduler of "IO mon_iothread" to rr or fifo (respectively).
All the QEMU emulator threads run with the "other" policy by default.
If you intentionally place the IO thread on the same CPUs as these
threads and give it "rr" or "fifo" policy it will obviously starve
those emulator threads for running time. This applies to all the
emulator threads, not only "IO mon_iothread". You can use
the <emulatorsched> element to control the scheduler policy
for the emulator threads. Or you can place them on a
different CPU so that they don't compete for resources.
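A sketch of what the two remedies look like in the domain XML - either
give the emulator threads the same real-time policy via <emulatorsched>,
or pin them to a different CPU via <emulatorpin> (the CPU numbers and
priority here are illustrative):

```xml
<cputune>
  <emulatorpin cpuset='2'/>
  <emulatorsched scheduler='fifo' priority='1'/>
  <iothreadpin iothread='1' cpuset='3'/>
  <iothreadsched iothreads='1' scheduler='fifo' priority='1'/>
</cputune>
```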
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|