[vfio-users] Looking for performance tips, qemu machine type and pci/pcie-root questioning?

thibaut noah thibaut.noah at gmail.com
Fri Apr 15 06:00:19 UTC 2016


2016-04-15 6:13 GMT+02:00 Okky Hendriansyah <okky.htf at gmail.com>:
>
> I think Alex had mentioned this, and if I recall correctly using
> pc-i440fx is preferable since it is simpler and going to pc-q35 won't have
> any performance benefit. Currently I only use pc-q35 specifically just for
> my Hackintosh guest. I haven't done any benchmark between these two types
> recently though, so the result might change.
>
>

I just read Alex's mail below yours; indeed you are right, nothing changes,
so much fuss for nothing :/
Thanks for the clarification Alex, btw.


>
> According to one of the reddit users at /r/vfio [1], avoiding hv_vapic
> and hv_synic on newer Intel CPUs (Ivy Bridge-E onwards), which have
> built-in Intel APICv, will generally improve performance by reducing VM
> exits. Currently I'm using these options:
>
> -cpu host,kvm=off,hv_time,hv_relaxed,hv_spinlocks=0x1fff,hv_vpindex,hv_reset,hv_runtime,hv_crash,hv_vendor_id=freyja
>

I read that post too, though I don't have enough knowledge about
virtualization to really understand what this guy is talking about. I
bumped into this:
https://software.intel.com/en-us/blogs/2009/06/25/virtualization-and-performance-understanding-vm-exits
but it didn't help much.
I will try your additional options while waiting to get the latest libvirt
version.
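If I got the APICv part right, I suppose I should check whether KVM is
actually using it before dropping hv_vapic/hv_synic; something like this is
what I have in mind (assuming the Intel kvm module, kvm_intel, is the one
exposing the flag on my machine):

    # should print Y if KVM is using hardware APIC virtualization (APICv)
    cat /sys/module/kvm_intel/parameters/enable_apicv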

>
> Those two kernel configurations (1000 MHz and Voluntary) made my stuttery
> Garret to a butter smooth Garret ;). Other plus point is that ZFS, which I
> use extensively for the OS guest images prefers Voluntary also. [2]
>

I personally have my VM image in a qcow2 container on an SSD. It would be
nice to get input from someone with I/O knowledge, since there are tons of
operations; optimizing the storage part would be great, especially for
people running on SSDs. I found a thread on the vfio reddit:
https://m.reddit.com/r/VFIO/comments/43fbmy/discussion_optimal_storage_settings/
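For what it's worth, what I was thinking of trying based on that thread
(untested guess on my side, the image path is just an example) is tuning
the cache/aio/discard options on the drive, something like:

    qemu-system-x86_64 ... \
        -drive file=/ssd/win10.qcow2,format=qcow2,if=virtio,cache=none,aio=native,discard=unmap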

Speaking of drives, from what I read it seems possible for qemu/kvm to boot
a native (non-virtualized) install of Windows from a passed-through drive.
If so, is there something special to do? I might go through the hassle of
reinstalling my whole Windows system, but I prefer to be sure before
touching anything (though I might just back up the image, boot my VM from
it and clone Windows, much easier than reinstalling).
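What I had in mind is roughly this, i.e. handing the whole physical disk
that already holds the native Windows install to the guest as a raw device
(the by-id path below is just a placeholder for my actual drive):

    qemu-system-x86_64 ... \
        -drive file=/dev/disk/by-id/ata-MyWindowsSSD,format=raw,if=virtio,cache=none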

That would be glorious for comparative benchmarks, since otherwise one
would need two installs on the same type of drive to have the same
configuration. 3DMark only runs on Windows, and the Heaven benchmark only
loads the GPU, so it is kind of useless in our case imo.

It seems like a container is not a great idea after all and that it would
be better to have a full disk reserved for the VM; might be worth
formatting, not sure about that.


> I think MADVISE hugepages doesn't directly affect guest performance.
> Though I find that using this option can help eliminate unneeded
> hugepage requests from applications that do not benefit from hugepages.
> So this option is more about efficient memory usage on the host,
> rather than guest performance, since the guest is already using
> dedicated hugepages (hugetlbfs).


I was under the impression that classic hugepages could reserve memory for
themselves, thus messing with hugetlbfs.
You mean that by mounting hugepages the memory is hidden from the host?
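Just to be sure I understand the hugetlbfs part, is the setup basically
this (rough sketch for my 8 GB guest with 2 MB pages, numbers are mine)?

    # reserve 4096 x 2 MB = 8 GB of static hugepages on the host
    echo 4096 > /proc/sys/vm/nr_hugepages
    # mount hugetlbfs if the distro does not already mount it there
    mount -t hugetlbfs hugetlbfs /dev/hugepages
    # back the guest RAM with those pages
    qemu-system-x86_64 -m 8192 -mem-path /dev/hugepages -mem-prealloc ...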

>
> Don't forget to still enable Windows paging if your guest memory is below
> the requirement. I had a low memory warning in Witcher 3 (I set the guest
> memory to 8 GB, and it still had 50%+ free memory) before I re-enabled
> Windows paging on C: again. The other alternative is to increase the
> guest memory. When I set it to 16 GB without Windows paging, Witcher 3
> didn't complain anymore.
>

Windows paging? What is this?
I allocated 8 GB of RAM to the guest, that should be enough; I'm closely
monitoring resource consumption with RivaTuner Statistics Server and I
never get beyond 6 GB even when benchmarking.

Since we are using virtio drivers from Red Hat, I wonder if updating them
frequently (I don't know if there are frequent updates, but still) might
result in better performance.
Speaking of which, if one breaks things while trying to update the drivers,
I assume adding the bootmenu option in libvirt allows booting Windows in
safe mode, right?
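In plain qemu terms I assume the equivalent would just be enabling the boot
menu, something like:

    # show the firmware boot menu so I can reach Windows recovery / safe mode
    # if a virtio driver update goes wrong
    qemu-system-x86_64 ... -boot menu=on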