[vfio-users] Looking for performance tips, qemu machine type and pci/pcie-root questionning?
ivan.volosyuk at gmail.com
Sun Apr 17 07:02:27 UTC 2016
I also use my Linux machine for development, but I found that half of my CPU
cores are sufficient for my tasks, so I dedicated the other half to Windows.
That VM is my untrusted sandbox, for gaming only. I don't have my Google
account on it, so to use Google Music I run it on the host; that's why I need
pulseaudio for mixing.
I also have a hacked-up setup for offloading CPUs using cpusets, so that
I can still recompile my system while playing games (in an ideal world). I
don't use libvirt; instead I run qemu directly
with -mem-path /home/vm-images/hugepages (and some hacks to free up the
required amount of host memory).
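For reference, here is a sketch of what reserving those hugepages works out to; the 8 GiB guest size is an assumption, and 2 MiB is the default x86-64 hugepage size:

```shell
# Work out how many 2 MiB hugepages cover the guest RAM (8 GiB is an assumption)
guest_mib=8192
hugepage_mib=2
pages=$(( guest_mib / hugepage_mib ))
echo "$pages hugepages needed"
# As root, one would then reserve and mount them (not executed here):
#   echo $pages > /proc/sys/vm/nr_hugepages
#   mount -t hugetlbfs hugetlbfs /home/vm-images/hugepages
# and point qemu at the mount with: -m ${guest_mib}M -mem-path /home/vm-images/hugepages
```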
All tasks are moved away from the dedicated cores using cpusets, then the qemu
tasks are assigned to those cores, then I pin the qemu vcores using:
i=4   # first dedicated core
echo "info cpus" | nc -q 1 localhost 1234 | head -n 6 | tail -n 4 |
cut -d= -f3 | tr -d '\r' | while read t; do
    echo taskset -pc $i $t
    taskset -pc $i $t
    i=$((i + 1))
done
Not sure how other people switch keyboard input between the host and the VM; I
use synergy hotkeys for that:
mousebutton(4) = mousebutton(4)
mousebutton(5) = mousebutton(5)
mousebutton(6) = mousebutton(4)
mousebutton(7) = mousebutton(5)
mousebutton(8) = mousebutton(4)
mousebutton(9) = mousebutton(5)
mousebutton(10) = mousebutton(4)
mousebutton(11) = mousebutton(5)
# nothing is more permanent than a temporary solution
keystroke(F11) = switchToScreen(vm)
keystroke(F12) = switchToScreen(pc)
I don't want a native install anymore, because the qcow2 snapshotting feature
is really awesome. I can sacrifice some HDD performance for qcow2 support,
especially since I gain by using Linux bcache, which benefits both Linux and
Windows.
On Sun, Apr 17, 2016 at 4:30 AM thibaut noah <thibaut.noah at gmail.com> wrote:
> Personally I use the sound from the graphics card directly, without setting
> anything special.
> For my USB headset I bought a USB controller card which I pass through to
> the VM; this way I can use the Windows driver, and it works just fine with
> everything :D
> I cannot use isolated CPUs (I won't) since I also use my desktop for Linux
> things, like programming for example, and I want to be able to do things
> with my host when the guest is shut down; otherwise I would have gone with a
> native install ;)
> I wanted to give cset a try, to temporarily reserve a core for the host
> and move all its processes there, but that didn't work out well with libvirt.
> What is your use of the pulseaudio server?
> 2016-04-15 16:34 GMT+02:00 Ivan Volosyuk <ivan.volosyuk at gmail.com>:
>> Optimization is my favorite topic.
>> I use qcow2 on bcache (HDD with SSD cache).
>> - In Windows I disabled: disk indexing, boot optimizations, and periodic
>> antivirus scans (they are all bad for bcache). On a pure SSD they might be ok.
>> Other Windows optimizations would be using MSI for interrupt handling;
>> see Alex's blog.
>> I use a virtio SCSI device for my root device; not sure if this is the best
>> configuration for qcow2. For a raw partition there should be completely
>> different flags.
>> STORAGE+=" -drive
>> STORAGE+=" -device scsi-hd,bus=scsi0.0,drive=disk"
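The -drive line above is truncated in the archive; for a virtio-scsi root on qcow2 it might look something like the following sketch (the file path, cache, and aio settings are assumptions, not the original flags):

```shell
# Hypothetical reconstruction: virtio-scsi controller plus a qcow2 backing file.
# cache=none with aio=native bypasses the host page cache (sensible on bcache).
STORAGE=" -device virtio-scsi-pci,id=scsi0"
STORAGE+=" -drive file=/home/vm-images/win.qcow2,id=disk,if=none,format=qcow2,cache=none,aio=native,discard=unmap"
STORAGE+=" -device scsi-hd,bus=scsi0.0,drive=disk"
echo "$STORAGE"
```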
>> Even with MSI I still have issues with audio crackles; these are the latest
>> optimizations I tried to reduce them. Not sure if this counts as
>> SND=" -soundhw ac97 -rtc base=utc,driftfix=slew -no-hpet -global
>> SND_DRIVER_OPTS="QEMU_AUDIO_DRV=pa QEMU_PA_SAMPLES=1024"
>> Use isolated CPUs. This makes them unavailable to the rest of the system,
>> but we are talking about a gaming machine, right? This should offload these
>> CPUs from normal kernel and userspace tasks.
>> kernel option: isolcpus=4-7
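With cores isolated, nothing lands on them unless pinned explicitly; taskset both queries and sets affinity, so a quick sanity check looks like this (querying the current shell is side-effect free):

```shell
# Show which CPUs the current shell is allowed to run on;
# on an isolcpus=4-7 host an unpinned task should report 0-3 only.
taskset -pc $$
```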
>> Use realtime priority on the pulseaudio server.
>> TODO: make sure it uses shared memory as the communication channel with qemu.
>> On Fri, Apr 15, 2016 at 4:01 PM thibaut noah <thibaut.noah at gmail.com>
>>> 2016-04-15 6:13 GMT+02:00 Okky Hendriansyah <okky.htf at gmail.com>:
>>>> I think Alex mentioned this, and if I recall correctly pc-i440fx is
>>>> preferable since it is simpler, and going to pc-q35 won't bring any
>>>> performance benefit. Currently I use pc-q35 only for my Hackintosh guest.
>>>> I haven't done any benchmark between these two types recently though, so
>>>> the result might have changed.
>>> I just read Alex's mail below yours; indeed you are right, nothing
>>> changes, so much fuss for nothing :/
>>> Thanks for the clarification Alex, btw.
>>>> According to one of the reddit users at /r/vfio , avoiding hv_vapic and
>>>> hv_synic on newer Intel CPUs, from Ivy Bridge-E onwards, which have
>>>> built-in Intel APICv, will generally improve performance by
>>>> reducing VM exits. Currently I'm using these options:
>>> I read that post too, though I don't have enough knowledge about
>>> virtualization to really understand what this guy is talking about; I
>>> bumped into this :
>>> but no.
>>> Will try your additional options while waiting to get the latest
>>> libvirt version.
>>>> Those two kernel configurations (1000 Hz and Voluntary preemption) made
>>>> my stuttery Garrett butter smooth ;). Another plus is that ZFS, which I
>>>> use extensively for the guest OS images, also prefers Voluntary.
>>> I personally have my VM image in a qcow2 container on an SSD. It would be
>>> nice to get someone with I/O knowledge, since there are tons of I/O
>>> operations; optimizing the storage part would be great, especially for
>>> people running on an SSD. I found a thread on the vfio reddit :
>>> Speaking of drives, it seems from what I read that it is possible for
>>> qemu/kvm to boot a native (non-virtualized) install of Windows from a
>>> passed-through drive.
>>> If so, is there something special to do? I might go to the hassle of
>>> reinstalling my whole Windows system, but I prefer to be sure before
>>> touching anything (though I might just back up the image, boot my VM from
>>> it and clone Windows, much easier than reinstalling).
>>> That would be glorious for comparative benchmarks, since otherwise one
>>> would need two installs on the same type of drive to have the same
>>> configuration; 3DMark only runs on Windows, and the Heaven benchmark only
>>> loads the GPU, so it is kind of useless in our case imo.
>>> It seems like a container is not a great idea after all, and that it
>>> would be better to have a full disk reserved for the VM; might be worth
>>> formatting, not sure about that.
>>>> I think MADVISE hugepages don't directly affect guest performance.
>>>> Though I find that using this option could help eliminate unneeded
>>>> hugepage requests from applications that don't benefit from hugepages.
>>>> So this option is more about efficient memory usage on the host,
>>>> rather than guest performance, since the guest is already using dedicated
>>>> hugepages (hugetlbfs).
>>> I was under the impression that classic hugepages could reserve memory
>>> to themselves, thus interfering with hugetlbfs.
>>> You mean that by mounting hugepages the memory is hidden from the host?
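Roughly yes: pages reserved into the hugetlbfs pool are taken out of the host's normal allocator, while MADVISE-mode transparent hugepages are only handed to processes that request them via madvise(). Both are visible read-only (a sketch; the counters will differ per machine):

```shell
# Explicit hugepage pool: pages in this pool leave the host's normal memory
grep -E '^HugePages_(Total|Free)' /proc/meminfo
# Transparent hugepage policy; [madvise] means only madvise()d regions get THP
cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null || true
```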
>>>> Don't forget to still enable Windows paging if your guest memory is
>>>> below the requirement. I had a low memory warning in The Witcher 3 (I set
>>>> the guest memory to 8 GB, and it still had 50%+ free memory) before I
>>>> re-enabled Windows paging on C: again. The other alternative is to
>>>> increase the guest memory; when I set it to 16 GB without Windows paging,
>>>> The Witcher 3 didn't complain anymore.
>>> Windows paging? What is this?
>>> I allocated 8 GB of RAM to the guest, which should be enough; I'm closely
>>> monitoring resource consumption with RivaTuner server and I never get
>>> beyond 6 GB even when benchmarking.
>>> Since we are using the virtio drivers from Red Hat, I wonder if updating
>>> them frequently (I don't know if there are frequent updates, but still)
>>> might result in better performance.
>>> Speaking of which, if one breaks things while trying to update the
>>> drivers, I assume adding the bootmenu option in libvirt allows booting
>>> Windows in safe mode, right?