[linux-lvm] LVM thin LV filesystem superblock corruption

Andres Toomsalu andres at active.ee
Fri Mar 22 15:12:18 UTC 2013

Update: the issue appears to affect only the PERC H800 with MD1200 disks; thin LVs on local RAID behind a PERC H700 survive reboots without corruption.

We stumbled on a strange case of filesystem corruption on a thinly provisioned LVM LV. The following steps reproduce the issue:

lvcreate --thinpool pool -L 8T --poolmetadatasize 16G VolGroupL1
lvcreate -T VolGroupL1/pool -V 2T --name thin_storage
mkfs.ext4 /dev/VolGroupL1/thin_storage
mount /dev/VolGroupL1/thin_storage /storage/

# NB: without a host reboot, unmount/mount succeeds. After rebooting the host:

[root@node3 ~]# mount /dev/VolGroupL1/thin_storage /storage/
mount: you must specify the filesystem type
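To check whether the superblock really is gone (rather than this being, say, a device naming problem), the ext4 magic can be read directly off the device. A minimal sketch against a scratch image, assuming the standard ext4 layout (superblock at byte offset 1024, s_magic 0xEF53 at 56 bytes into it, i.e. offset 1080); the same dd/od pair can be pointed at /dev/VolGroupL1/thin_storage after the reboot:

```shell
# Build a scratch ext4 image and read the two magic bytes at offset 1080.
truncate -s 16M /tmp/sb-test.img
mkfs.ext4 -q -F /tmp/sb-test.img
dd if=/tmp/sb-test.img bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1
```

On a healthy filesystem this prints the little-endian magic (53 ef); all zeros would mean the thin device is returning unprovisioned blocks.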

I also tried setting poolmetadatasize to 2G, 14G, and 15G, and the pool size to 1T and 2T; the corruption still occurs in every combination.
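For reference, a back-of-the-envelope estimate (roughly 64 bytes of metadata per mapped chunk, as described in lvmthin(7), and assuming the default 64KiB thin-pool chunk size) suggests 16G of pool metadata should already be more than enough for an 8T pool, so metadata exhaustion looks unlikely:

```shell
# Rough thin-pool metadata requirement: pool_size / chunk_size * 64 bytes.
pool_bytes=$((8 * 1024 * 1024 * 1024 * 1024))   # 8T pool
chunk_bytes=$((64 * 1024))                       # default 64KiB chunks
meta_gib=$((pool_bytes / chunk_bytes * 64 / (1024 * 1024 * 1024)))
echo "${meta_gib}GiB"                            # ~8GiB needed; 16G is ample
```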

Hardware setup:
* The underlying block device (sdb) is hosted by a PERC H800 controller; the disks come from a SAS expansion enclosure (Dell MD1200).

Some debug info:
[root@node3 ~]# lvs
  LV           VG         Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  lv_root      VolGroup   -wi-ao--  50.00g
  lv_swap      VolGroup   -wi-ao--   4.00g
  pool         VolGroupL1 twi-a-tz   1.00t               0.00
  thin_storage VolGroupL1 Vwi-a-tz 100.00g pool          0.00

[root@node3 ~]# lvdisplay /dev/VolGroupL1/thin_storage
  --- Logical volume ---
  LV Path                /dev/VolGroupL1/thin_storage
  LV Name                thin_storage
  VG Name                VolGroupL1
  LV UUID                qla8Zf-FOdU-WB0j-SSdv-Xzpk-c9MS-gc97fc
  LV Write Access        read/write
  LV Creation host, time node3.oncloud.int, 2013-03-22 15:38:08 +0200
  LV Pool name           pool
  LV Status              available
  # open                 0
  LV Size                100.00 GiB
  Mapped size            0.00%
  Current LE             800
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6

[root@node3 ~]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  VolGroup     1   2   0 wz--n-  3.27t 3.22t
  VolGroupL1   1   2   0 wz--n- 10.91t 9.91t

[root@node3 ~]# vgdisplay VolGroupL1
  --- Volume group ---
  VG Name               VolGroupL1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  61
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               10.91 TiB
  PE Size               128.00 MiB
  Total PE              89399
  Alloc PE / Size       8208 / 1.00 TiB
  Free  PE / Size       81191 / 9.91 TiB
  VG UUID               2cHIOM-Rs9u-B5Mv-FaZv-KORq-mrTk-QIGfoG

[root@node3 ~]# pvs
  PV         VG         Fmt  Attr PSize  PFree
  /dev/sda2  VolGroup   lvm2 a--   3.27t 3.22t
  /dev/sdb   VolGroupL1 lvm2 a--  10.91t 9.91t

[root@node3 ~]# pvdisplay /dev/sdb
  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               VolGroupL1
  PV Size               10.91 TiB / not usable 128.00 MiB
  Allocatable           yes
  PE Size               128.00 MiB
  Total PE              89399
  Free PE               81191
  Allocated PE          8208
  PV UUID               l3ROps-Aar9-wSUO-ypGj-Wwi1-G0Wu-VqDs1a

What could be the issue here?

Andres Toomsalu, andres at active.ee
