[vfio-users] vm lags/pauses when IO overloaded on host

Jiri 'Ghormoon' Novak ghormoon at gmail.com
Mon Jul 25 11:41:00 UTC 2016


Hi,

it still sounds to me like some qemu misconfiguration; it doesn't seem
normal that the VM pauses and then fast-forwards the paused time (e.g.
if you watch a spinning circle that does 1 spin/s, doing IO-heavy stuff
will hang the machine completely for, say, 10 s, you can't even move
the mouse or anything, and then you see 10 spins in half a second).
I think that if there's IO wait, the guest should wait for the IO
inside, not pause.
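(For reference, the knobs that usually matter here are the drive cache
mode and the AIO backend; a minimal sketch of the relevant qemu options,
with a made-up zvol path, would be something like:

    -drive file=/dev/zvol/tank/vm,format=raw,if=virtio,cache=none,aio=native

cache=none bypasses the host page cache and aio=native uses Linux
native AIO instead of the thread pool, which tends to behave better
when the host is under IO load.)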

Usually I have enough ARC. I can try to play with the logbias too, but
that doesn't sound like the correct solution, as it won't solve the VM
pausing, only make it appear less often.
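(A sketch of what I mean, with placeholder names, capping ARC at 4 GiB
and biasing the dataset backing the VM towards throughput:

    # cap ARC at 4 GiB across reboots (value in bytes)
    echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
    # or change it at runtime
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
    # bias the zvol towards throughput instead of latency
    zfs set logbias=throughput tank/vm

tank/vm is just an example dataset name.)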

And as for LVM, it might have less fragmentation, so the problem would
not be this big (as ZFS is CoW), but zfs send/recv is still far
superior to moving around/backing up LVM volumes, and I don't have to
worry about having 30 snapshots on it.
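(For example, roughly, with made-up dataset and host names, an
incremental backup is just:

    zfs snapshot tank/vm@2016-07-25
    zfs send -i tank/vm@2016-07-24 tank/vm@2016-07-25 | \
        ssh backuphost zfs recv backup/vm

whereas with LVM you end up dd'ing or rsyncing snapshot volumes around.)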

Regards,
Gh.

Samuel Holland wrote:
> On 07/20/2016 02:41 AM, Marcin Falkiewicz wrote:
>> ZFS's zvol is really bad as a virtual disk backend when you have only
>> a few HDDs (or even SSDs) - I had a lot of problems with latency and
>> throughput on my 2-disk mirrored testbench. Hard to tell if it's ZFS
>> itself, ZoL, or just not enough disks and a slow controller.
>
> I haven't had any problems with zvols on a mirror of 7200RPM drives.
> It's not as snappy as my SSD, for sure, but it's reasonably fast, and I
> haven't noticed it bogging down the host.
>
> Some things to consider: ARC size and logbias. I have 16GiB of RAM, and
> 8GiB goes to the VM, so I limited my ARC to 4 GiB to avoid memory
> pressure issues on the host. Normally it would use half of RAM, but then
> host applications would constantly have to reclaim memory from the
> cache.
>
> Second, I have logbias=throughput on the whole pool. I do not have a
> ZIL. There are a few pathological cases (on the host side) where I can get
> my desktop to lock up for a minute or two, such as rsyncing several
> hundred GiB of files from the SSD, but generally the performance is
> better than logbias=latency (which is the default). I find there's
> actually less stuttering. Plus it decreases fragmentation and makes my
> drives quieter (less seeking).
>
>> On the other hand, LVM runs great on single SSD (directsync+native)
>> or even RAID1 HDDs (mdadm; none+native), achieving close to bare
>> metal performance with virtio-scsi.
>
> Out of curiosity, are you using scsi-hd, scsi-block, or scsi-generic?
>
> -- 
> Regards,
> Samuel
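Regarding the scsi-hd / scsi-block / scsi-generic question above, a
rough sketch of how the three differ on the qemu command line (paths
and IDs are made up):

    -device virtio-scsi-pci,id=scsi0
    # scsi-hd: qemu emulates a SCSI disk on top of any image or block device
    -drive file=/dev/zvol/tank/vm,if=none,format=raw,cache=none,aio=native,id=drive0
    -device scsi-hd,drive=drive0,bus=scsi0.0
    # scsi-block: passes SCSI commands through to a real host block device (/dev/sdX)
    # scsi-generic: passes through a SCSI generic device (/dev/sgX)

so on top of a zvol, scsi-hd is the one that applies.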
