<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<font face="Times New Roman">Hi All<br>
<br>
We are currently testing our product with KVM as the hypervisor.
We are not using KVM as a bare-metal hypervisor: we run it on top
of a RHEL installation, so RHEL acts as our host and we deploy
guests on this system using KVM.<br>
<br>
Until now we have always tested and shipped our application image
for VMware ESXi installations, so this is the first time we are
trying our application image on a KVM hypervisor.<br>
<br>
On this front I have run some tests to measure our application's
response time when deployed on KVM and compare it with the same
application deployed on VMware ESXi. We have a benchmark test
that loads the application by simulating 100 parallel users
logging into the system and downloading reports; these tests use
HTTP GET requests to load the application VM. In addition, I have
taken care to use the same hardware for both tests: one server
with RHEL(Host)+KVM and another with VMware ESXi. All hardware
specifications for the two servers are identical, and the load
test is the same on both.<br>
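For reference, here is a minimal sketch of the kind of load the benchmark generates, assuming curl is available; it stands up a throwaway local web server in place of the real appliance so the sketch runs as-is (the URL and user count are placeholders, not our actual test setup):<br>

```shell
# Minimal sketch of the benchmark driver: N parallel HTTP GETs and the
# mean response time. The URL points at a throwaway local server so the
# sketch is self-contained; substitute the appliance's report URL.
python3 -m http.server 8080 --bind 127.0.0.1 >/dev/null 2>&1 &
SRV_PID=$!
sleep 1

USERS=100
AVG=$(seq "$USERS" \
  | xargs -P "$USERS" -I{} curl -s -o /dev/null -w '%{time_total}\n' \
      "http://127.0.0.1:8080/" \
  | awk '{ sum += $1 } END { printf "%.0f", sum / NR * 1000 }')
echo "average response time: ${AVG} ms"

kill "$SRV_PID"
```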
<br>
The first observation is the average response time: 500
milliseconds on VMware ESXi versus 1050 milliseconds on
RHEL(Host)+KVM. In other words, the application's response time
when deployed on KVM is roughly twice what it is when deployed on
VMware ESXi.<br>
<br>
I ran a few more tests to find out which subsystem on these
servers shows diverging metrics.<br>
<br>
First I used IOZone to check for any mismatch in the speed with
which data is read from / written to the local disk on the two
VMs, and found that the "Read" speed in the VM deployed on
RHEL(Host)+KVM was roughly half that of the VM deployed on
VMware ESXi.<br>
<br>
For more on IOZone, please refer to: <a class="moz-txt-link-freetext" href="http://www.iozone.org/">http://www.iozone.org/</a><br>
<br>
More specifically, the following IOZone metrics were roughly half
of those measured on the server running VMware ESXi:<br>
<br>
</font><font face="Times New Roman"> </font>
<table style="border-collapse: collapse;width:150pt" border="0"
cellpadding="0" cellspacing="0" width="199">
<tbody>
<tr><td style="width:150pt" width="199">Read</td></tr>
<tr><td>Re-read</td></tr>
<tr><td>Reverse-Read</td></tr>
<tr><td>Stride Read</td></tr>
<tr><td>Pread</td></tr>
</tbody>
</table>
<font face="Times New Roman"> <br>
Note: I ran the IOZone tests inside the VMs on both servers.<br>
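As a rough cross-check of the IOZone read numbers, a plain dd re-read inside each guest is also useful. This is only a sketch (the file size and path are arbitrary), and the fadvise step is a best-effort attempt to evict the file from the page cache so the re-read actually touches the virtual disk:<br>

```shell
# Write a test file, try to evict it from the page cache, then time a
# sequential re-read. dd prints the throughput on stderr.
TESTFILE=$(mktemp /tmp/readtest.XXXXXX)
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>/dev/null

# Best-effort cache eviction for just this file (no root needed).
python3 - "$TESTFILE" <<'EOF'
import os, sys
fd = os.open(sys.argv[1], os.O_RDONLY)
os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
os.close(fd)
EOF

dd if="$TESTFILE" of=/dev/null bs=1M
rm -f "$TESTFILE"
```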
<br>
The second observation comes from the output of the "top"
command. The VM deployed on RHEL(Host)+KVM shows higher numbers
than the VM deployed on VMware ESXi for the following metrics:<br>
<br>
load averages<br>
%sy for all the logical processors<br>
%si for all the logical processors<br>
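For the record, the per-CPU system and softirq shares can also be pulled straight out of /proc/stat rather than eyeballed from top; note that, unlike top's interval figures, these counters are cumulative since boot:<br>

```shell
# Per-CPU system (%sy) and softirq (%si) share of total CPU time,
# computed from the cumulative counters in /proc/stat. Field 4 is
# system time and field 8 is softirq time, both in jiffies.
awk '/^cpu[0-9]/ {
    total = 0
    for (i = 2; i <= NF; i++) total += $i
    printf "%s  %%sy=%.1f  %%si=%.1f\n", $1, 100 * $4 / total, 100 * $8 / total
}' /proc/stat
```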
<br>
I debugged further to find out which device is generating more
interrupts and found it to be "ide0"; see the output from the
/proc/interrupts file below. The other interrupt counts, apart
from ide0, are pretty much in line with the VM deployed on
VMware ESXi.<br>
<br>
************ /proc/interrupts *******************<br>
<pre>
[root@localhost ~]# cat /proc/interrupts
           CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7
  0:     795827          0          0          0          0          0          0          0   IO-APIC-edge   timer
  1:         65          0          0          0          0          0          0          0   IO-APIC-edge   i8042
  6:          2          0          0          0          0          0          0          0   IO-APIC-edge   floppy
  8:          0          0          0          0          0          0          0          0   IO-APIC-edge   rtc
  9:          0          0          0          0          0          0          0          0   IO-APIC-level  acpi
 10:     425785          0          0          0          0          0          0          0   IO-APIC-level  virtio0, eth0
 11:         47          0          0          0          0          0          0          0   IO-APIC-level  uhci_hcd:usb1, HDA Intel
 12:        730          0          0          0          0          0          0          0   IO-APIC-edge   i8042
 14:     188086          0          0          0          0          0          0          0   IO-APIC-edge   ide0
NMI:          0          0          0          0          0          0          0          0
LOC:     795813     795798     795783     795767     795752     795737     795723     795709
ERR:          0
MIS:          0
</pre>
*********************************************<br>
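For completeness, this is roughly how I confirmed which interrupt line grows under load: two snapshots of /proc/interrupts a couple of seconds apart, then the per-IRQ delta (the file names and interval are arbitrary):<br>

```shell
# Diff two snapshots of /proc/interrupts: sum the per-CPU counters on
# each line and print only the lines whose totals grew in between.
cat /proc/interrupts > /tmp/irq.before
sleep 2
cat /proc/interrupts > /tmp/irq.after
awk 'NR == FNR { for (i = 2; i <= NF; i++) if ($i ~ /^[0-9]+$/) before[$1] += $i; next }
     { total = 0
       for (i = 2; i <= NF; i++) if ($i ~ /^[0-9]+$/) total += $i
       if (total > before[$1]) printf "%-6s +%d\n", $1, total - before[$1] }' \
    /tmp/irq.before /tmp/irq.after
```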
<br>
Any pointers on improving the response time of the VM on the
RHEL(Host)+KVM installation would be greatly appreciated.<br>
<br>
Thanks<br>
Jatin<br>
<br>
</font>
</body>
</html>