On October 29, 2015 at 23:51:52, Ryan Flagler (ryan flagler gmail com) wrote:
There are archives of this list that you can view at .
Please use bottom posting to improve readability of the mailing lists.
As far as I know, Q35 models a more recent PC architecture, while i440FX is the more mature option. My general rule of thumb is to use i440FX by default and switch to Q35 only when i440FX doesn't support what the VM needs; virtualizing a Mac OS X machine, for instance, requires the Q35 chipset.
I’ve tried Windows 10 on both i440FX and Q35 and see no significant performance difference, if any exists.
It depends on how you want to manage the disk drives/images. A raw image (.img) tends to perform better than a QEMU (qcow, qcow2) image, but a QEMU image can do snapshots. Passing through a real disk (/dev/sd[x]) should perform the same as native, but it loses the flexibility to do migration. The benefit of using a real disk is that you can boot the system natively from it, though you would need to reinstall the drivers.
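As a rough sketch of the trade-off above, this is how you would create each image type with qemu-img and take a snapshot of the qcow2 one (file names and sizes here are just examples):

```shell
# Raw image: best performance, no snapshot support in the format itself
qemu-img create -f raw win10.img 40G

# qcow2 image: slightly slower, but supports internal snapshots
qemu-img create -f qcow2 win10.qcow2 40G

# Create and list an internal snapshot on the qcow2 image
qemu-img snapshot -c before-update win10.qcow2
qemu-img snapshot -l win10.qcow2
```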
Another option is to use a raw image on top of ZFS (or Btrfs). That combines raw-image performance with the underlying storage pool's snapshotting, compression, clones, thin provisioning, etc. This is currently my approach on the storage side. I have 8 x 1 TB disks configured as a ZFS striped mirror pool (RAID10), and on top of it I place a dedicated ZFS dataset for each VM.
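For reference, a layout like the one I described can be sketched as follows (pool name, device names, and dataset names here are hypothetical, not my actual setup):

```shell
# Striped mirror (RAID10) across 8 x 1 TB disks: four 2-way mirror vdevs
zpool create tank \
  mirror sda sdb \
  mirror sdc sdd \
  mirror sde sdf \
  mirror sdg sdh

# A parent dataset, then one dataset per VM with compression enabled
zfs create tank/vm
zfs create -o compression=lz4 tank/vm/win10

# The VM's raw image lives inside its own dataset, so it can be
# snapshotted and cloned independently of the other VMs
qemu-img create -f raw /tank/vm/win10/disk.img 100G
```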
I think bridge vs. NAT really depends on the network topology you want to expose, not on performance. I prefer my VMs to be first-class citizens on my home network, so I always choose bridged networking. I’ll answer the VirtioNet part below your last post.
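For completeness, a minimal bridged setup with iproute2 looks roughly like this (interface names are hypothetical; your distro's network manager may do this for you):

```shell
# enp3s0 is the host's physical NIC; br0 is the bridge the VMs join
ip link add name br0 type bridge
ip link set enp3s0 master br0
ip link set br0 up
# QEMU's tap device is then attached to br0, so the guest gets an
# address on the same LAN as the host instead of a NATed subnet
```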
I can’t comment much on CPU pinning, since my host is only used by myself and I don’t see a major performance breakdown on either the host or the VM when the VM uses all my cores. If you’re using an NVIDIA GPU, the best -cpu parameters should be:
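Based on the description below (host CPU passthrough, hiding KVM from the NVIDIA driver, Hyper-V enlightenments), the line was probably something along these lines; the exact flag set and the hv_vendor_id value here are my assumptions, not the original's:

```shell
# Sketch only: pass through the host CPU model, hide the KVM signature
# (kvm=off), and enable Hyper-V enlightenments; hv_vendor_id is a
# placeholder string, and it requires a very recent QEMU
qemu-system-x86_64 ... \
  -cpu host,kvm=off,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=some_id
```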
That basically uses the exact CPU spec of the host, hides the KVM CPUID from the NVIDIA driver, and applies several Hyper-V enlightenments for performance. Multiplayer games (like Tera) seem to be affected more by these Hyper-V flags. The last Hyper-V flag needs a very recent QEMU built from the Git repo, or Alex’s patch from . I think you can skip the hv_vendor_id flag if you’re using an AMD GPU.
As far as I know, the major difference between SeaBIOS and OVMF is that of BIOS (legacy) vs. UEFI (legacy-free). The major downside of SeaBIOS, if you use Intel graphics on the KVM host, is VGA arbitration. See Alex’s explanation on .
Alex summarized this on .
On November 14, 2015 at 08:06:00, Ryan Flagler (ryan flagler gmail com) wrote:
I used the qemu-mac-hasher Python script from  to generate a consistent MAC address based on the name of a VM. You can get better performance than plain VirtioNet by using vhost . This is my current line for enabling the virtual NIC of my VM.
-netdev tap,vhost=on,id=brlan -device virtio-net-pci,mac=$(/usr/local/bin/qemu-mac-hasher $VM_NAME),netdev=brlan
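The idea behind qemu-mac-hasher can be sketched roughly like this (the actual script's hashing scheme may differ; this is just an illustration of deriving a stable MAC from the VM name):

```python
import hashlib

def mac_hash(vm_name: str) -> str:
    """Derive a stable MAC address from a VM name (sketch, not the real script)."""
    # Hash the name so the same VM always gets the same MAC
    digest = hashlib.md5(vm_name.encode("utf-8")).hexdigest()
    # 52:54:00 is the OUI QEMU conventionally uses for KVM virtual NICs;
    # append the first three bytes of the digest for the host part
    return "52:54:00:" + ":".join(digest[i:i + 2] for i in range(0, 6, 2))
```

The point is determinism: re-running the VM launch script never changes the guest's MAC, so DHCP reservations and leases stay stable.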
A passed-through physical NIC should give better performance and spend less CPU on emulation. Since you have 4 NICs, I think you’d be better off passing one through to the VM.
Please correct me if I’m wrong.