[linux-lvm] LVM with large arrays, one large or multiple small PVs?

Bryn M. Reeves bmr at redhat.com
Wed Aug 13 18:01:23 UTC 2008


Allen, Jack wrote:
>> On Behalf Of Bryn M. Reeves
>> Recent releases of LVM2 improve performance for large VGs as they
>> perform caching of metadata read from disk under some circumstances.
>> There were also a series of fixes to ensure that the tools behave
>> correctly when they find PVs with no metadata.
>> See earlier threads in the archives on this subject for more information
>> & examples of creating VGs with reduced numbers of metadata areas.
>> Regards,
>> Bryn.
> ============================================================
> You are saying it affects the performance of some of the LVM tools, but
> what about File Systems or databases that use the Logical Volume
> directly?

Correct. It has no effect on I/O performance - at least not in typical
real-world cases. You could probably construct a worst-case scenario
where the VG is severely fragmented across hundreds and hundreds of PVs
so that I/O performance suffers, but I have never encountered this
situation in practice (and it could just as easily occur in a VG
containing a smaller number of PVs with the right sequence of LV
manipulations and allocation policy).

On the other hand, poor tool performance with VGs containing 10s or 100s
of PVs has been reported in real world situations on numerous occasions
(particularly on s390/zVM systems using IBM DASD storage since these are
limited to 2GiB per volume so tend to be used in large numbers).

These problems tend not to be noticeable with modest numbers of PVs (~10
or so), although the impact can be measured. With hundreds of
metadata-containing PVs the problem can be very noticeable - especially
with LVM2 versions that do not include Milan's metadata caching
enhancements.
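As a rough sketch of reducing the number of metadata areas: pvcreate
accepts a --metadatacopies option, and pvs can report how many metadata
areas each PV carries. The device names below are hypothetical (DASD-style
names chosen to match the example above), and the commands need root on a
system with LVM2 installed - adapt them to your own devices:

```shell
# Hypothetical device names - substitute your own.
# Create most PVs with no metadata area at all:
pvcreate --metadatacopies 0 /dev/dasdb1 /dev/dasdc1 /dev/dasdd1

# Keep metadata on a small number of PVs (here, just one):
pvcreate --metadatacopies 1 /dev/dasda1

# Build the VG from all of them; only /dev/dasda1 holds metadata:
vgcreate bigvg /dev/dasda1 /dev/dasdb1 /dev/dasdc1 /dev/dasdd1

# Verify how many metadata areas each PV holds:
pvs -o pv_name,vg_name,pv_mda_count
```

With only a few metadata-carrying PVs, the tools have far fewer copies
to read and write, at the cost of less redundancy for the VG metadata
itself - so keep metadata on at least two or three PVs in practice.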


