[linux-lvm] LVM2 robustness w/ large (>100TB) name spaces?
marki at marki-online.net
Tue Dec 23 10:27:36 UTC 2008
Tuesday, December 23, 2008, 1:15:28, Steve Costaras wrote:
> - What are the limits on PE/LE's per logical volume (>200,000,000? A
> problem?) (I will be attaching multiple external chassis like above to
> several HBA's and will be using LVM striping to increase performance. So a
> small PE size (4MB-8MB) would be best to aid in the distribution of requests
> across the physical subsystems.)
I think 4-8 MB is too small a PE size when you will be using such
large (and probably advanced) arrays.
LVM striping with a stripe size in the hundreds of kB would kill any
array: when you request, say, 512 kB from one array and the next
512 kB from another array, they can't handle it efficiently. You won't
see the benefit of reading from all 16 spindles - every request will
just load 512 kB from one physical disk. Sequential-read detection on
the array might also not work well in this case.
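For comparison, on Linux LVM2 the PE size and the stripe size are set
independently at creation time. A hypothetical sketch (device names, VG/LV
names and sizes are made up, and this needs root plus real PVs to run):

```shell
# Create a volume group with 32 MB physical extents instead of the
# default 4 MB (-s sets the PE size).
vgcreate -s 32M bigvg /dev/sdb /dev/sdc

# Create a striped LV across both PVs: -i is the stripe count and
# -I the stripe size in kB. A large stripe (4 MB here; the sensible
# value depends on the array) keeps each request on one array long
# enough for its cache and read-ahead to help.
lvcreate -i 2 -I 4096 -L 1T -n biglv bigvg
```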
In HP-UX LVM with enterprise arrays like HP EVA or HP XP we use 32-64 MB
PEs and enable distribution - that means the "stripe" size equals the PE size:
LE1 = PV1_1
LE2 = PV2_1
LE3 = PV1_2
LE4 = PV2_2 and so on.
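The round-robin mapping above can be sketched as a toy model in Python
(the extent count and PV names are illustrative, not from any real tool):

```python
# Toy model of HP-UX LVM "PE distribution": logical extents are
# assigned round-robin across the physical volumes, so consecutive
# LEs land on alternating PVs (stripe size == PE size).

def distribute_extents(n_les, pv_names):
    """Map each logical extent to (PV, extent-within-PV) round-robin."""
    mapping = {}
    for le in range(n_les):
        pv = pv_names[le % len(pv_names)]       # alternate PV1, PV2, ...
        pe_index = le // len(pv_names)          # extent number on that PV
        mapping[le] = (pv, pe_index)
    return mapping

layout = distribute_extents(4, ["PV1", "PV2"])
# Reproduces the example above (1-based names):
# LE1 = PV1_1, LE2 = PV2_1, LE3 = PV1_2, LE4 = PV2_2
for le, (pv, pe) in layout.items():
    print(f"LE{le + 1} = {pv}_{pe + 1}")
```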
With this layout you request, for example, 32 MB from one array at a
time. Given the arrays' cache sizes and read-ahead, you should get much
better performance, because those 32 MB will be fetched in parallel from
all 16 spindles.
Also, we don't use distribution across 2 arrays; we just use different
paths to one array (different HBA, different SAN switch and different
array FC controller). We use 2 arrays only for mirroring data to the
other datacentre for clusters.
The main reason we use that PE distribution is that HP-UX does not
have load-balancing multipath built in. But even when you do have it,
using more PVs is better because of the architectural limits of arrays
(number of outstanding requests per virtual drive, SCSI queue depth
on the server and on the array, cache memory limits per virtual drive, etc.)
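The queue-depth point can be made concrete with some toy arithmetic (the
per-LUN depth of 32 is an assumed common default, not a figure from this
post; real values are driver- and array-dependent):

```python
# With a fixed per-LUN SCSI queue depth, spreading a volume over more
# PVs (LUNs) multiplies how many requests can be outstanding at once.

PER_LUN_QUEUE_DEPTH = 32  # assumed typical default, array-dependent

def max_outstanding(n_pvs, per_lun_depth=PER_LUN_QUEUE_DEPTH):
    """Upper bound on concurrently outstanding requests for the LV."""
    return n_pvs * per_lun_depth

print(max_outstanding(1))   # one big LUN
print(max_outstanding(8))   # same capacity carved into eight LUNs
```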