[Linux-cluster] Poor LVM performance.
a.holway at syseleven.de
Sat Dec 29 02:49:04 UTC 2012
I have been asking around about this for a while. I got the same results with CLVM with an iSCSI box I had on loan.
I have been doing some testing with KVM and Virtuozzo (container-based virtualisation) on various storage devices and have some results I would like help analyzing. I have a nice big ZFS box from Oracle (yes, evil, but Solaris NFS is amazing), connected to my cluster over 10G Ethernet and InfiniBand. The cluster is four HP servers (E5-2670, 144GB RAM), each with a RAID10 of 600k SAS drives.
Please open these pictures side by side.
You will notice that using KVM/LVM on the local RAID10 (and CLVM on iSCSI) completely destroys performance, whereas the container-based virtualisation is excellent and as fast as NFS.
"4, 8, 12, 16... VMs" refers to the aggregate performance of the benchmark across that number of VMs: 4 = 1 VM on each node, 8 = 2 VMs on each node. "TPCC warehouses" is the number of TPC-C warehouses the benchmark used. One warehouse is about 150MB, so 10 warehouses means roughly 1.5GB of data being held in the InnoDB buffer pool.
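For anyone reproducing this, the dataset sizing works out as a simple multiplication; here is a quick sketch using the ~150MB-per-warehouse figure above (the figure itself is approximate):

```python
# Rough TPC-C dataset sizing: roughly 150MB of InnoDB data per warehouse.
MB_PER_WAREHOUSE = 150

def dataset_mb(warehouses: int) -> int:
    """Approximate InnoDB data footprint in MB for a warehouse count."""
    return warehouses * MB_PER_WAREHOUSE

for w in (1, 10, 100):
    print(f"{w:>3} warehouses ~= {dataset_mb(w) / 1024:.1f} GB")
```

So at 10 warehouses the whole working set (~1.5GB) fits comfortably in the buffer pool, which is worth keeping in mind when comparing the storage backends.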
Why does LVM performance suck so hard compared to a single-filesystem approach? What am I doing wrong?
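One thing worth ruling out (a common culprit with KVM on LV-backed storage, though I can't say it's the cause here) is the guest disk cache mode. A sketch of the libvirt disk stanza I would try, with placeholder device paths:

```xml
<!-- Hypothetical libvirt disk definition: cache='none' bypasses the host
     page cache and io='native' uses Linux AIO, which often helps for
     raw LV-backed guests. The VG/LV path below is just an example. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/vg0/guest-disk'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

With the default (writethrough-style) caching, every guest write can hit the host synchronously, which would show up exactly as the kind of gap between KVM/LVM and the container results described above.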