[linux-lvm] Performance impact of LVM

Sander Smeenk ssm+lvm at freshdot.net
Fri Jul 28 07:24:13 UTC 2006


Quoting Lamont R. Peterson (peregrine at openbrainstem.net):

> > Then i made a scsi_vg01, with all the scsi disks and a ide_vg01 with all
> > the ide disks, and started lvcreating "partitions" inside those vg's.
> Any particular reason to not include all the disks in a single VG?

Well, SCSI is usually faster than IDE. So the SCSI disks are used for
storing databases, website content, etc. The IDE disks are only used
for booting and for storing local backup copies, etc...

Although with the latest (pata) IDE disks the performance difference
has already become really small ;)
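
For the archives, that layout boils down to something like this (device
names and sizes are made-up examples, of course):

  # SCSI disks in one VG, IDE disks in another
  pvcreate /dev/sda /dev/sdb /dev/hda /dev/hdc
  vgcreate scsi_vg01 /dev/sda /dev/sdb
  vgcreate ide_vg01 /dev/hda /dev/hdc

  # then carve out the "partitions" as LVs
  lvcreate -L 50G -n databases scsi_vg01
  lvcreate -L 20G -n webcontent scsi_vg01
  lvcreate -L 100G -n backups ide_vg01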

> Also, this setup will actually leave you more vulnerable to single disk 
> failures.  I would *highly* recommend using RAID to aggregate your disks 
> together, then use LVM on top of that to make things manageable.

Mmmh, yeah, that's true. Although it doesn't really matter much if one
of these servers fails due to disk problems. There are enough of them
to take over the traffic ;)

I should fix that sometime soon though ;)
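
If I do get around to it, I guess it would be something along these
lines (md mirroring underneath, LVM on top; device names are just
examples):

  # build a RAID1 array out of two disks
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

  # use the array as the single PV for the volume group
  pvcreate /dev/md0
  vgcreate vg01 /dev/md0
  lvcreate -L 50G -n databases vg01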

> The only "extra" LVM I/O done is when you are (re)configuring LVM.  Things 
> like creating, resizing & deleting an LV require a little bit of disk I/O, of 
> course.  Other than the small amount of overhead when using snapshot volumes, 
> there isn't any other impact on I/O performance.

OK, really clear explanation. Luckily it matches my idea about LVM, and
hopefully with the two responses I've gotten I can convince my superiors
that LVM will not cause any noticeable performance impact.
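
If I understand it correctly, that snapshot overhead only applies while
a volume actually has a snapshot attached, because every write to the
origin then has to copy the old data first. Something like (size made
up):

  # 1G copy-on-write snapshot of an existing LV; writes to
  # scsi_vg01/databases now pay the COW penalty until it is removed
  lvcreate -s -L 1G -n databases_snap /dev/scsi_vg01/databases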

> However, I wonder if the LVM address look-up code is better than, equal to or 
> any worse than that for a plain block device (e.g. partition, loopback 
> mounted file, etc.).  If there is a statistically relevant delta there, I 
> think it would only impact I/O latency and even then, it couldn't be much.

Would running bonnie++ on a 'plain block device' and on an LV be a good
enough way to measure that? ;-)   I think the latency difference is so
small that it won't show anyway ;)
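
What I had in mind is roughly this (mount points and sizes made up;
bonnie++ wants a filesystem, so both devices get the same one):

  # one test LV and one plain partition for comparison
  lvcreate -L 8G -n bench scsi_vg01
  mkfs.ext3 /dev/sdd1
  mkfs.ext3 /dev/scsi_vg01/bench
  mount /dev/sdd1 /mnt/plain
  mount /dev/scsi_vg01/bench /mnt/lvm

  # identical bonnie++ runs on both, then compare the latency columns
  bonnie++ -d /mnt/plain -s 4096 -u nobody
  bonnie++ -d /mnt/lvm -s 4096 -u nobody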

> When booting your system, it does have to take a moment and "vgscan" for VGs.  
> This is pretty fast, but it adds a second or two to your bootup time.

Haha. That's NO problem compared to the time it takes the BIOS to detect
the disks, load the SATA BIOS and scan for disks, then the Adaptec BIOS
and another scan for disks, then the two network cards, blah ;)
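
For what it's worth, that scan is roughly what the init scripts run at
boot anyway, I assume:

  # find all VGs and activate them
  vgscan
  vgchange -a y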

> That's all I can think of off the top of my head.  HTH.

TY! Really appreciated.

Kind regards,
Sander.
-- 
| It was a business doing pleasure with you!
| 1024D/08CEC94D - 34B3 3314 B146 E13C 70C8  9BDB D463 7E41 08CE C94D