[linux-lvm] Possible bug in thin metadata size with Linux MDRAID

Gionatan Danti g.danti at assyoma.it
Wed Mar 8 16:14:05 UTC 2017


Hi list,
I would like to understand if this is an lvmthin metadata size bug or
if I am simply missing something.

These are my system specs:
- CentOS 7.3 64 bit with kernel 3.10.0-514.6.1.el7
- LVM version 2.02.166-1.el7_3.2
- two Linux software RAID devices, md127 (root) and md126 (storage)

MD array specs (the interesting one is md126):
Personalities : [raid10]
md126 : active raid10 sdd2[3] sda3[0] sdb2[1] sdc2[2]
       557632000 blocks super 1.2 128K chunks 2 near-copies [4/4] [UUUU]
       bitmap: 1/5 pages [4KB], 65536KB chunk

md127 : active raid10 sdc1[2] sda2[0] sdd1[3] sdb1[1]
       67178496 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
       bitmap: 0/1 pages [0KB], 65536KB chunk
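
For completeness, the same chunk size can be cross-checked against the
array metadata itself (just a sanity check, nothing more):

# mdadm --detail /dev/md126 | grep -iE 'raid level|chunk size'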

As you can see, /dev/md126 has a 128 KB chunk size. I used this device
to host a physical volume and volume group, on which I created a 512 GB
thin pool. Then I created a thin logical volume of the same size
(512 GB) and started filling it. Somewhere near (but not at) full
capacity, the volume went offline due to metadata exhaustion.
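
In short, the sequence was roughly the following (a sketch from memory;
the thin LV name "thinvol" and the dd fill are only placeholders, while
the actual pool creation command is reported verbatim below):

# pvcreate /dev/md126
# vgcreate vg_kvm /dev/md126
# lvcreate --thin vg_kvm/thinpool -L 512G
# lvcreate --thin --virtualsize 512G --name thinvol vg_kvm/thinpool
# dd if=/dev/zero of=/dev/vg_kvm/thinvol bs=1M     # fill until the pool went offline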

Let's see how the logical volume was created and how it appears:
[root at blackhole ]# lvcreate --thin vg_kvm/thinpool -L 512G; lvs -a -o +chunk_size
   Using default stripesize 64.00 KiB.
   Logical volume "thinpool" created.
   LV               VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Chunk
   [lvol0_pmspare]  vg_kvm    ewi------- 128.00m                                                         0
   thinpool         vg_kvm    twi-a-tz-- 512.00g             0.00   0.83                           128.00k
   [thinpool_tdata] vg_kvm    Twi-ao---- 512.00g                                                         0
   [thinpool_tmeta] vg_kvm    ewi-ao---- 128.00m                                                         0
   root             vg_system -wi-ao----  50.00g                                                         0
   swap             vg_system -wi-ao----   7.62g                                                         0
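
By the way, the same numbers are visible at the device-mapper level: the
thin-pool status line reports used/total metadata blocks, and the table
line shows the data block (chunk) size in 512-byte sectors. For example
(the -tpool suffix is how LVM exposes the actual pool device):

# dmsetup status vg_kvm-thinpool-tpool   # <used>/<total> metadata blocks, then data blocks
# dmsetup table  vg_kvm-thinpool-tpool   # data block size in sectors (256 = 128 KiB)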

The metadata volume is about half the size I expected, and not big
enough to reach 100% data utilization. Indeed, thin_metadata_size
reports a minimum metadata volume size of over 130 MiB:

[root at blackhole ]# thin_metadata_size -b 128k -s 512g -m 1 -u m
thin_metadata_size - 130.04 mebibytes estimated metadata area size for "--block-size=128kibibytes --pool-size=512gibibytes --max-thins=1"

Now, the interesting thing: by explicitly setting --chunksize=128, the
metadata volume comes out twice as big (and in line with my
expectations):
[root at blackhole ]# lvcreate --thin vg_kvm/thinpool -L 512G --chunksize=128; lvs -a -o +chunk_size
   Using default stripesize 64.00 KiB.
   Logical volume "thinpool" created.
   LV               VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Chunk
   [lvol0_pmspare]  vg_kvm    ewi------- 256.00m                                                         0
   thinpool         vg_kvm    twi-a-tz-- 512.00g             0.00   0.42                           128.00k
   [thinpool_tdata] vg_kvm    Twi-ao---- 512.00g                                                         0
   [thinpool_tmeta] vg_kvm    ewi-ao---- 256.00m                                                         0
   root             vg_system -wi-ao----  50.00g                                                         0
   swap             vg_system -wi-ao----   7.62g                                                         0
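
Out of curiosity, the estimator can also be run across a few chunk sizes
to see which one a 128 MiB metadata area would actually correspond to:

# for bs in 64k 128k 256k 512k; do thin_metadata_size -b $bs -s 512g -m 1 -u m; done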

Why did I see two very different metadata volume sizes? The chunk size
was 128 KB in both cases; the only difference is that I explicitly
specified it on the command line...
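
If useful, the defaults lvcreate falls back to when no chunk size is
given can be inspected like this (option names as per lvm.conf(5); the
last command is a verbose dry run, so nothing is actually created):

# lvmconfig --type default allocation/thin_pool_chunk_size_policy
# lvmconfig --type default allocation/thin_pool_chunk_size
# lvcreate --test -vvvv --thin vg_kvm/thinpool -L 512G 2>&1 | grep -iE 'chunk|metadata'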

Thanks.

-- 
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti at assyoma.it - info at assyoma.it
GPG public key ID: FF5F32A8



