[linux-lvm] Possible bug in thin metadata size with Linux MDRAID

Zdenek Kabelac zkabelac at redhat.com
Mon Mar 20 13:57:08 UTC 2017


Dne 20.3.2017 v 12:52 Gionatan Danti napsal(a):
> On 20/03/2017 12:01, Zdenek Kabelac wrote:
>>
>>
>> As said - please try with HEAD - and report back if you still see a
>> problem.
>> There were a couple of issues fixed along this path.
>>
>
> Ok, I tried now with tools and library from git:
>
> LVM version:     2.02.169(2)-git (2016-11-30)
> Library version: 1.02.138-git (2016-11-30)
> Driver version:  4.34.0
>
> I can confirm that the thin chunk size is no longer bound (by default) to the
> MD RAID chunk size. For example, after creating a ~500 GB MD RAID 10 array
> with 64 KB chunks, creating a thin pool shows:
>
> [root at blackhole ~]# lvcreate --thinpool vg_kvm/thinpool -L 500G
> [root at blackhole ~]# lvs -a -o +chunk_size
>   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
>   LV               VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Chunk
>   [lvol0_pmspare]  vg_kvm    ewi------- 128.00m                                                            0
>   thinpool         vg_kvm    twi-a-tz-- 500.00g             0.00   0.42                                256.00k
>   [thinpool_tdata] vg_kvm    Twi-ao---- 500.00g                                                            0
>   [thinpool_tmeta] vg_kvm    ewi-ao---- 128.00m                                                            0
>   root             vg_system -wi-ao----  50.00g                                                            0
>   swap             vg_system -wi-a-----   7.62g                                                            0
>
> Should I open a bug against the RHEL-provided packages?
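As a sanity check on the numbers above (my own back-of-the-envelope estimate, not the exact lvm2 sizing formula): thin-pool metadata needs on the order of 64 bytes per data chunk, which lines up with the 128 MiB tmeta that lvcreate picked for a 500 GiB pool with 256 KiB chunks:

```shell
# Rough rule of thumb (assumption): ~64 bytes of metadata per thin-pool chunk.
pool_bytes=$((500 * 1024 * 1024 * 1024))   # 500 GiB pool (tdata size)
chunk_bytes=$((256 * 1024))                # 256 KiB chunk, as shown by lvs
chunks=$((pool_bytes / chunk_bytes))       # number of chunks in the pool
meta_mib=$((chunks * 64 / 1024 / 1024))    # estimated metadata size in MiB
echo "${meta_mib} MiB"                     # ~125 MiB, close to the 128 MiB tmeta
```

So the default 128 MiB tmeta leaves only a small margin; with snapshots or a much larger pool it can be worth sizing it explicitly (lvcreate --poolmetadatasize).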


Well, if you want support for your existing packages, you would
need to go through the 'GSS' channel.

You may open a BZ - which will get closed with the next release of RHEL 7.4
(as you already confirmed upstream has resolved the issue).

Zdenek

More information about the linux-lvm mailing list