[libvirt-users] VM Performance using KVM Vs. VMware ESXi

Jatin Davey jashokda at cisco.com
Tue Apr 14 10:31:58 UTC 2015


On 4/14/2015 3:52 PM, Jatin Davey wrote:
> Hi All
>
> We are currently testing our product using KVM as the hypervisor. We 
> are not using KVM as a bare-metal hypervisor. We use it on top of a 
> RHEL installation. So basically RHEL acts as our host and using KVM we 
> deploy guests on this system.
>
> We have all along tested and shipped our application image for VMware 
> ESXi installations, so this is the first time we are trying our 
> application image on a KVM hypervisor.
>
> On this front I have done some tests to compare our application's 
> response time when deployed on KVM against a VM deployed on VMware 
> ESXi. We have a benchmark test that loads the application by 
> simulating 100 parallel users logging into the system and downloading 
> reports. These tests use an HTTP GET query to load the application VM. 
> In addition to that, I have taken care to use the same hardware for 
> both tests, one with RHEL(Host)+KVM and the other with VMware ESXi. 
> All the hardware specifications for both servers remain the same, and 
> the load test is identical on both servers.
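>
> For reference, a load of this shape can be driven with a minimal shell 
> sketch (the endpoint URL and user count are placeholders, not our 
> actual benchmark harness):

```shell
#!/bin/sh
# Fire N parallel HTTP GETs (one per simulated user), record each
# request's total time, then print the average in milliseconds.
URL="${URL:-http://app-vm.example/report}"   # placeholder endpoint
N="${N:-100}"
seq 1 "$N" | xargs -P "$N" -I{} \
    curl -s -o /dev/null -w '%{time_total}\n' "$URL" > times.txt || true
# Average per-request response time:
awk '{ sum += $1 } END { if (NR) printf "%.0f ms\n", 1000 * sum / NR }' times.txt
```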
>
> First observation: the average response time on VMware ESXi is 500 
> milliseconds, while the application's average response time when 
> deployed using RHEL(Host)+KVM is 1050 milliseconds. The response time 
> of the application when deployed on KVM is roughly twice that of the 
> same application deployed on VMware ESXi.
>
> I ran a few more tests to find out which subsystem on these servers 
> showed differing metrics.
>
> First I started with IOZone to find out if there is any mismatch in 
> the speed at which data is read/written to the local disk on the two 
> VMs, and found that the "Read" speed in the VM deployed using 
> RHEL(Host)+KVM was roughly half that of the VM deployed using 
> VMware ESXi.
>
> For more on IOZone, please refer to: http://www.iozone.org/
>
> More specifically, the following IOZone metrics were roughly half of 
> those measured on the server running VMware ESXi:
>
> Read
> Re-read
> Reverse-Read
> Stride read
> Pread
>
>
> Note: I ran the IOZone tests inside the VMs on both servers.
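>
> The read-side runs above can be reproduced inside each guest with a 
> single IOZone invocation (a sketch; the file path and sizes are 
> placeholders, and -i 0 is needed first so the test file exists):

```shell
# Write the test file (-i 0), then run the read tests that showed the
# gap: -i 1 read/re-read, -i 3 read-backwards, -i 5 stride-read,
# -i 10 pread. -s is the file size, -r the record size.
iozone -i 0 -i 1 -i 3 -i 5 -i 10 -s 512m -r 4k -f /tmp/iozone.tmp
```

> Using a file size well above guest RAM (or unmounting/remounting the 
> filesystem between runs) helps keep the guest page cache from 
> flattering the read numbers.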
>
> The second observation was the output from the "top" command. I 
> could see that the VM deployed on RHEL(Host)+KVM showed higher 
> numbers for the following metrics compared with the VM deployed 
> on VMware ESXi:
>
> load averages
> %sy (time spent in the kernel) for all the logical processors
> %si (time spent servicing software interrupts) for all the logical processors
>
> I debugged further to find out which device was causing more 
> interrupts and found it to be "ide0"; see the output from the 
> /proc/interrupts file below. The other interrupt counts, apart from 
> ide0, are pretty much the same as on the VM deployed using VMware 
> ESXi.
>
> ************/proc/interrupts *******************
> [root@localhost ~]# cat /proc/interrupts
>            CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7
>   0:     795827          0          0          0          0          0          0          0    IO-APIC-edge   timer
>   1:         65          0          0          0          0          0          0          0    IO-APIC-edge   i8042
>   6:          2          0          0          0          0          0          0          0    IO-APIC-edge   floppy
>   8:          0          0          0          0          0          0          0          0    IO-APIC-edge   rtc
>   9:          0          0          0          0          0          0          0          0   IO-APIC-level   acpi
>  10:     425785          0          0          0          0          0          0          0   IO-APIC-level   virtio0, eth0
>  11:         47          0          0          0          0          0          0          0   IO-APIC-level   uhci_hcd:usb1, HDA Intel
>  12:        730          0          0          0          0          0          0          0    IO-APIC-edge   i8042
>  14:     188086          0          0          0          0          0          0          0    IO-APIC-edge   ide0
> NMI:          0          0          0          0          0          0          0          0
> LOC:     795813     795798     795783     795767     795752     795737     795723     795709
> ERR:          0
> MIS:          0
> *********************************************
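>
> The ide0 row above is the key difference: the KVM guest's disk is on 
> the emulated IDE controller (an interrupt-heavy I/O path), while the 
> NIC is already paravirtualized (virtio0). One common fix, sketched 
> below assuming the domain XML currently has the disk on bus='ide' 
> (file paths and device names here are placeholders), is to move the 
> disk to virtio:

```xml
<!-- Before: emulated IDE disk -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/libvirt/images/guest1.img'/>
  <target dev='hda' bus='ide'/>
</disk>

<!-- After: paravirtualized virtio-blk (the guest needs the virtio_blk
     driver; the disk then appears as /dev/vda inside the guest) -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/libvirt/images/guest1.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

> Edit the definition with "virsh edit <domain>", then shut the guest 
> down fully and start it again; any references to /dev/hda in the 
> guest's /etc/fstab or bootloader configuration will need updating 
> to /dev/vda.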
>
> Any pointers for improving the response time of the VM on the 
> RHEL(Host)+KVM installation would be greatly appreciated.
>
> Thanks
> Jatin
>
>
>
> _______________________________________________
> libvirt-users mailing list
> libvirt-users at redhat.com
> https://www.redhat.com/mailman/listinfo/libvirt-users
Forgot to provide this information.

We are using RHEL(Host):

[root@localhost ~]# cat /etc/*release
LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Red Hat Enterprise Linux Server release 6.5 (Santiago)
Red Hat Enterprise Linux Server release 6.5 (Santiago)

and the QEMU version in use is:

virsh # version
Compiled against library: libvirt 0.10.2
Using library: libvirt 0.10.2
Using API: QEMU 0.10.2
Running hypervisor: QEMU 0.12.1

Thanks
Jatin