[linux-lvm] Possible bug in thin metadata size with Linux MDRAID
zkabelac at redhat.com
Mon Mar 20 11:01:43 UTC 2017
On 20. 3. 2017 at 11:45, Gionatan Danti wrote:
> On 20/03/2017 10:51, Zdenek Kabelac wrote:
>> Please check upstream behavior (git HEAD)
>> It will still take a while before final release so do not use it
>> regularly yet (as few things still may change).
> I will surely try with git head and report back here.
>> Not sure for which other comment you look for.
> 1. you suggested that a 128 MB metadata volume is "quite good" for a 512 GB
> volume and 128 KB chunks. However, my tests show that a nearly full data volume
> (with *no* overprovisioning nor snapshots) will exhaust its metadata *before*
> really becoming 100% full.
> 2. On a MD RAID with 64KB chunk size, things become much worse:
> [root at gdanti-laptop test]# lvs -a -o +chunk_size
>   LV               VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Chunk
>   [lvol0_pmspare]  vg_kvm    ewi------- 128.00m
>   thinpool         vg_kvm    twi-a-tz-- 500.00g             0.00   1.58
>   [thinpool_tdata] vg_kvm    Twi-ao---- 500.00g
>   [thinpool_tmeta] vg_kvm    ewi-ao---- 128.00m
>   root             vg_system -wi-ao----  50.00g
>   swap             vg_system -wi-ao----   3.75g
> Thin metadata chunks are now at 64 KB - with the *same* 128 MB metadata
> volume size. Now metadata can only address ~50% of thin volume space.
> So, am I missing something, or does the RHEL 7.3-provided LVM have some serious
> problems identifying the correct metadata volume size when running on top of an
> MD RAID device?
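The mismatch above can be sanity-checked with a back-of-envelope calculation. The ~32 bytes of metadata per mapped chunk used below is an assumed average inferred from the figures quoted in this thread, not an official constant; the real on-disk format is a btree, so for exact numbers use the thin_metadata_size tool from device-mapper-persistent-data.

```python
# Back-of-envelope estimate of how much pool data a thin metadata LV
# can map before it fills up.
# ASSUMPTION: ~32 bytes of metadata per mapped chunk, a rough average
# consistent with the numbers in this thread; not an exact constant.
BYTES_PER_MAPPING = 32

def addressable_fraction(pool_bytes, chunk_bytes, metadata_bytes):
    """Fraction of the pool the metadata LV can map before exhaustion."""
    pool_chunks = pool_bytes // chunk_bytes
    mappable_chunks = metadata_bytes // BYTES_PER_MAPPING
    return min(1.0, mappable_chunks / pool_chunks)

GiB = 1024 ** 3
MiB = 1024 ** 2
KiB = 1024

# 500 GiB pool, 64 KiB chunks, 128 MiB metadata: roughly half the pool,
# matching the ~50% observed above.
print(addressable_fraction(500 * GiB, 64 * KiB, 128 * MiB))
```

Under this assumption, 500 GiB with 64 KiB chunks has about 8.2 million chunks, while 128 MiB of metadata only covers about 4.2 million mappings, i.e. roughly half the pool.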
As said - please try with HEAD - and report back if you still see a problem.
There were a couple of issues fixed along this path.
In my test it seems 500G needs at least 258M of metadata with a 64K chunk size.
On the other hand - AFAIK it's never been documented that a thin-pool without
monitoring is guaranteed to fit a single fully-provisioned LV - users basically
need to know what they are doing when they use thin-provisioning - but of course
we continuously try to make things more usable.
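For reference, monitoring with automatic extension is handled by dmeventd; a sketch of the relevant lvm.conf knobs is below (check your distribution's lvm.conf for the authoritative names and defaults):

```
# /etc/lvm/lvm.conf (activation section) - sketch, verify against your version
activation {
    monitoring = 1                       # let dmeventd watch thin pools
    thin_pool_autoextend_threshold = 80  # extend when data/metadata use hits 80%
    thin_pool_autoextend_percent = 20    # grow the pool by 20% each time
}
```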