[linux-lvm] LVM causing IO contention or slowdown?

Austin Gonyou austin at coremetrics.com
Sat Jan 19 01:33:01 UTC 2002


As a follow-up to this I've done some more testing, and will do the rest
of it this weekend using the AIM db benchmark.

Here's the setup: XFS mounted with logbufs=8 on my quad-Xeon (2MB cache)
6450 with 8 Ultra-2 drives, arranged as 4 RAID0 volumes (two of 3 disks
each and two of 1 disk each).
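
(For reference, logbufs is an XFS mount option; each mount looked roughly
like the following, with the device and mount point swapped in for the
volume under test:

  mount -t xfs -o logbufs=8 /dev/testvg/testlv /mnt/test
)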

The larger, more "costly" volumes, when mounted using logbufs=8 under LVM,
outperformed the same volume addressed when not under LVM control. Better
still, whether WITH or WITHOUT LVM controlling the volume, logbufs=8 nearly
doubled my throughput to those drives.

Here is what I'm talking about:

#With LVM management
Throughput 109.715 MB/sec (NB=137.144 MB/sec  1097.15 MBit/sec)  200 procs

#Without LVM management
Throughput 100.803 MB/sec (NB=126.004 MB/sec  1008.03 MBit/sec)  200 procs


This only happened once, though, and I'm not sure exactly why. Still, the
most I could get out of the same test on repeated runs with LVM enabled was
around 80-85 MB/s, which is still a HUGE improvement over 43-44 MB/s.


On Fri, 2002-01-18 at 22:29, Austin Gonyou wrote:
> Some disturbing things about LVM I've found. I'm not sure if this has
> anything to do with XFS or not; I'm still working on that part.
> 
> Anyway, I've got an AMI MegaRAID 1600 or so (a Dell PERC2/DC, and a Dell
> PERC3/DC [also AMI/LSI]). I conducted a test with dbench in the following
> manner:
> 
> 
> 1. create a 3 drive RAID0 on the PERC. 
> 2. mkfs.xfs the new "drive" as the OS sees it. (/dev/sdxx) 
> 3. mount /dev/sdxx /mnt/test 
> 4. chown austin:austin /mnt/test 
> <login as austin> 
> 5. cd /mnt/test 
> 6. cp ~/dbench/client.txt . 
> 7. dbench 200 (ended up at 29MB/s) 
> 8. dbench 150 (ended up at 31MB/s) 
> <login as root> 
> 9. umount /mnt/test 
> 10. fdisk /dev/sdxx 
> 11. 't' to change the partition type 
> 12. '1' for partition # 
> 13. '8e' for Linux LVM, then 'w' to write the partition table. 
> 14. pvcreate /dev/sdxx; vgcreate -Ay testvg /dev/sdxx 
> 15. lvcreate -Cy -L 109G -n testlv testvg 
> 16. mkfs.xfs /dev/testvg/testlv 
> 17. mount /dev/testvg/testlv /mnt/test 
> 18. chown austin:austin /mnt/test 
> <login as austin> 
> 19. cp ~/dbench/client.txt . 
> 20. dbench 200 (ended up at 54MB/s) 
> 21. dbench 150 (ended up at 61MB/s) 
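> 
> (For completeness, the resulting layout can be sanity-checked before
> benchmarking with the standard LVM tools, using the names created above:
> 
>   vgdisplay testvg
>   lvdisplay /dev/testvg/testlv
> )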
> 
> The disk that was made into a PV above is /dev/sdxx, i.e. a partition. I
> also set the LV to be contiguous, which could be very bad for this kind
> of dbench test; I'm not sure. Next I did something similar to the above,
> but removed all partitions from the physical device and recreated the
> volume using the whole disk, /dev/sdx, roughly as sketched below.
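> 
> (From memory, the whole-disk variant looked something like this; I'm not
> certain whether I passed -Cy again on this run:
> 
>   pvcreate /dev/sdx
>   vgcreate -Ay testvg /dev/sdx
>   lvcreate -L 109G -n testlv testvg
>   mkfs.xfs /dev/testvg/testlv
>   mount /dev/testvg/testlv /mnt/test
> )
> 
> That said, I retested just now and got the following: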
> 
> o Throughput 49.733 MB/sec (NB=62.1662 MB/sec  497.33 MBit/sec)  200 procs
> o Throughput 43.5752 MB/sec (NB=54.469 MB/sec  435.752 MBit/sec)  150 procs
>   (not sure about why this is)
> 
> and then on a whim...
> 
> o Throughput 37.4975 MB/sec (NB=46.8719 MB/sec  374.975 MBit/sec)  250 procs
> 
> and again...
> 
> o Throughput 43.4332 MB/sec (NB=54.2916 MB/sec  434.332 MBit/sec)  150 procs
> 
> Either way, these results are a far cry from those shown above using a
> contiguous LV on a partition instead of a whole disk. If there's a way to
> improve this, please advise.
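> 
> (One knob to look at is the allocation policy itself; the contiguous flag
> set at step 15 can be turned off on an existing LV with something like:
> 
>   lvchange -C n /dev/testvg/testlv
> 
> though I haven't verified whether that makes a difference here.)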
> 
> And then I removed the volume from LVM control, and tested again: 
> 
> o Throughput 51.2596 MB/sec (NB=64.0745 MB/sec  512.596 MBit/sec)  200 procs
> o Throughput 52.0488 MB/sec (NB=65.061 MB/sec  520.488 MBit/sec)  150 procs
> 
> Not as huge a jump as last night, but still rather far above what it is
> with LVM turned on. 
> 
> Anyone know why this might be?
> 
> 
> On Fri, 2002-01-18 at 17:08, karlheg at microsharp.com wrote: 
> > 
> >  I had visions of using XFS on LVM volumes for our product.  I would
> >  like to start with some default LV sizes, and have the ability to
> >  shrink and grow them as needed to meet individual requirements.  For
> >  this, I will need to use ext3fs.  (and very stable LVM; still
> >  learning about it.)
-- 
Austin Gonyou
Systems Architect, CCNA
Coremetrics, Inc.
Phone: 512-698-7250
email: austin at coremetrics.com

"It is the part of a good shepherd to shear his flock, not to skin it."
Latin Proverb