[linux-lvm] LVM with large arrays, one large or multiple small PV's?

Bryn M. Reeves bmr at redhat.com
Wed Aug 13 10:09:56 UTC 2008


Marek Podmaka wrote:
> I think from LVM side it doesn't matter if you have 10x PV of size 100
> GB or 1 PV of size 1 TB. But it really depends on the SAN disk array
> you have and how it handles each virtual disk. Does it have queues,
> caches for each vdisk or for the entire array? For example for HP EVA
> arrays it is recommended to prefer smaller vdisks if possible.

It does affect LVM - the performance of the LVM2 tools degrades as the
number of physical volumes in a volume group grows. Strictly speaking,
what matters is the number of metadata areas, which defaults to one per
physical volume (each metadata area holds a full copy of the VG
metadata). For large volume groups you can use the "--metadatacopies"
option to pvcreate to control the number of metadata areas on each PV
(set it to zero for some PVs) and avoid the sluggish tool performance
you might otherwise see.
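As a rough sketch (the device names below are hypothetical, and the
commands need root), a large VG could be built so that only two PVs
carry metadata areas:

```shell
# Most PVs get no metadata area at all.
pvcreate --metadatacopies 0 /dev/sdc1 /dev/sdd1 /dev/sde1

# Keep metadata on a couple of PVs for redundancy.
pvcreate --metadatacopies 1 /dev/sdb1 /dev/sdf1

# Build the VG from all of them; only sdb1 and sdf1 hold VG metadata.
vgcreate bigvg /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

# Check which PVs carry metadata areas.
pvs -o pv_name,pv_mda_count
```

Keeping at least two metadata copies in the VG is advisable so that
losing a single metadata-bearing PV does not lose the VG metadata.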

Recent releases of LVM2 improve performance for large VGs because they
cache metadata read from disk under some circumstances. There was also
a series of fixes to ensure that the tools behave correctly when they
find PVs with no metadata.

See earlier threads in the archives on this subject for more information
& examples of creating VGs with reduced numbers of metadata areas.

