[linux-lvm] Possible bug in thin metadata size with Linux MDRAID

Gionatan Danti g.danti at assyoma.it
Mon Mar 20 10:45:11 UTC 2017


On 20/03/2017 10:51, Zdenek Kabelac wrote:
>
> Please check upstream behavior (git HEAD)
> It will still take a while before final release so do not use it
> regularly yet (as few things still may  change).

I will surely try with git HEAD and report back here.

>
> Not sure for which other comment you look for.
>
> Zdenek
>
>
>

1. You suggested that a 128 MB metadata volume is "quite good" for a
512 GB data volume with 128 KB chunks. However, my tests show that a
nearly full data volume (with *no* overprovisioning nor snapshots) will
exhaust its metadata *before* actually becoming 100% full.

2. On an MD RAID with a 64 KB chunk size, things become much worse:
[root@gdanti-laptop test]# lvs -a -o +chunk_size
  LV               VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert  Chunk
  [lvol0_pmspare]  vg_kvm    ewi------- 128.00m                                                          0
  thinpool         vg_kvm    twi-a-tz-- 500.00g             0.00   1.58                             64.00k
  [thinpool_tdata] vg_kvm    Twi-ao---- 500.00g                                                          0
  [thinpool_tmeta] vg_kvm    ewi-ao---- 128.00m                                                          0
  root             vg_system -wi-ao----  50.00g                                                          0
  swap             vg_system -wi-ao----   3.75g                                                          0

The thin pool chunk size is now 64 KB - with the *same* 128 MB metadata
volume size. The metadata can now address only ~50% of the thin volume
space (see the back-of-envelope calculation below).
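
For reference, this is the rough arithmetic I am using. It is only a
sketch: the ~32 bytes per mapping figure is not taken from any
documentation, it is simply inferred from test 1 above, where 128 MB of
metadata was almost (but not quite) enough for ~512 GB of data at
128 KB chunks:

   # mappings the metadata must hold = data size / chunk size;
   # ~32 bytes per mapping is inferred from test 1, not from LVM docs
   DATA_GIB=500
   for CHUNK_KIB in 128 64; do
       CHUNKS=$(( DATA_GIB * 1024 * 1024 / CHUNK_KIB ))
       NEEDED_MIB=$(( CHUNKS * 32 / 1024 / 1024 ))
       echo "${CHUNK_KIB}k chunks: ${CHUNKS} mappings, ~${NEEDED_MIB} MiB metadata"
   done

This gives ~125 MiB for 128 KB chunks and ~250 MiB for 64 KB chunks,
which matches what I observe: 128 MB is barely sufficient in the first
case and covers only about half of the pool in the second.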

So, am I missing something, or does the RHEL 7.3-provided LVM have some
serious problems identifying the correct metadata volume size when
running on top of an MD RAID device?
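
I will also cross-check against the worst-case estimate from
thin_metadata_size (thin-provisioning-tools / device-mapper-persistent-data
package). The invocation below is just how I read its man page, and the
-m value (expected number of thin volumes) is an assumption for my setup:

   # worst-case metadata estimate for a 500 GB pool with 64 KB chunks
   # and (assumed) up to 10 thin volumes, reported in mebibytes
   thin_metadata_size -b 64k -s 500g -m 10 -u m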

Thanks.


-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8



