[libvirt-users] Lots of threads and increased IO load

Gergely Horváth gergely.horvath at inepex.com
Wed Nov 13 16:07:12 UTC 2013


On 2013-11-13 16:38, Eric Blake wrote:
> I don't know if qemu exposes a knob for limiting the number of aio
> helper threads it can spawn, or even if that is a good idea.

Those threads do not cause any problems; the host is handling a lot of
them without trouble (CPU usage and load are remarkably low in
practice). Thank you for the clarification, I now see why there are more
threads sometimes.
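
For the record, here is a quick sketch of how I watch the per-guest
thread count (assuming the emulator binary is qemu-kvm, as in the
config below):

  # count the threads of each running qemu-kvm process
  for pid in $(pidof qemu-kvm); do
      echo "$pid: $(ls /proc/$pid/task | wc -l) threads"
  done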

> What domain XML are you using?  Yes, there are different disk cache
> policies (writethrough vs. none) which have definite performance vs.
> risk tradeoffs according to the amount of IO latency you want the guest
> to see; but again, the qemu list may be a better help in determining
> which policy is best for your needs.  Once you know the policy you want,
> then we can help you figure out how to represent it in the domain XML.

I am not sure exactly what you are asking, but here is the relevant
part of one of the guests:

<domain type='kvm' id='10'>
  <name>...</name>
  <uuid>...</uuid>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-1.6'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>SandyBridge</model>
    <vendor>Intel</vendor>
    <feature policy='require' name='pbe'/>

	...

    <feature policy='require' name='monitor'/>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/ssd/vmstorage/web1.raw'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/ssd/vmstorage/web1-1.swap'/>
      <target dev='vdc' bus='virtio'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>

	...

    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='none'/>
</domain>

Currently, the running guests have no "cache" parameter passed to qemu,
so I guess they are using qemu's default setting, which is writethrough
according to the QEMU wiki.
(http://en.wikibooks.org/wiki/QEMU/Devices/Storage)
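
One way to double-check what the guests are actually running with (a
sketch, assuming a single qemu-kvm process; the PID lookup would need
adjusting for multiple guests) is to print the -drive options from the
process command line; if no cache= option appears, the built-in default
is in effect:

  # print each qemu argument on its own line, then show the value following -drive
  tr '\0' '\n' < /proc/$(pidof qemu-kvm)/cmdline | grep -A1 '^-drive$'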

As I understand it, moving towards more risk and more "performance",
I could then experiment with "writeback"?

i.e. <driver name='qemu' type='raw' cache='writeback'/>
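
For reference, a sketch of how that attribute sits in the full disk
element (reusing the first disk from the config above; cache='none',
which bypasses the host page cache, is the other mode commonly
suggested for local raw images):

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/ssd/vmstorage/web1.raw'/>
      <target dev='vda' bus='virtio'/>
    </disk>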

Cheers.

--
Üdvözlettel / Best regards

Horváth Gergely | gergely.horvath at inepex.com

IneTrack - Tracking made simple | Inepex Kft.
Customer service: support at inetrack.hu | +36 30 825 7646 | support.inetrack.hu
Web: www.inetrack.hu | nyomkovetes-blog.hu | facebook.com/inetrack

Inepex - The White Label GPS fleet-tracking platform | www.inepex.com