[linux-lvm] Repair thin pool

Mars kirapangzi at gmail.com
Wed Feb 17 02:48:23 UTC 2016


2016-02-10 18:32 GMT+08:00 Joe Thornber <thornber redhat com>:

> Yep, I definitely want these for upstream.  Send me what you've got,
> whatever state it's in; I'll happily spend a couple of weeks tidying
> this.
>
> - Joe

The feature is complete and workable, but the code is based on v0.4.1.

I need a few days to clean up & rebase. Please wait.

syntax:

thin_ll_dump /dev/mapper/corrupted_tmeta [-o thin_ll_dump.xml]

thin_ll_restore -i edited_thin_ll_dump.xml -E /dev/mapper/corrupted_tmeta -o /dev/mapper/fixed_tmeta
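
Roughly, the intended workflow would look something like this (a sketch only; the device and file names are placeholders, and the options are used exactly as in the syntax above):

# dump the low-level metadata structure of the damaged device to XML
thin_ll_dump /dev/mapper/corrupted_tmeta -o thin_ll_dump.xml

# hand-edit the XML to re-attach the recovered roots, then rebuild
# the metadata onto a separate device
thin_ll_restore -i edited_thin_ll_dump.xml -E /dev/mapper/corrupted_tmeta -o /dev/mapper/fixed_tmeta

# verify the rebuilt metadata before using it
thin_check /dev/mapper/fixed_tmeta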

Ming-Hung Tsai

-------------

Hi,

Thank you very much for giving us so much advice.

Here is some progress, based on the mail conversation between you guys:

1. Check the metadata device:

[root@stor14 home]# thin_check /dev/mapper/vgg145155121036c-pool_nas_tmeta0
examining superblock
examining devices tree
examining mapping tree
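
(Note that the clean thin_check run above seems to mean only that the structures still on disk are self-consistent; it does not tell us whether the device and mapping trees are actually populated. A quick way to confirm is to look at the exit status, using the same device name as above:)

thin_check /dev/mapper/vgg145155121036c-pool_nas_tmeta0
echo $?   # 0 = the remaining structures pass the checks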

2. Dump the metadata info:

[root@stor14 home]# thin_dump /dev/mapper/vgg145155121036c-pool_nas_tmeta0 -o nas_thin_dump.xml -r
[root@stor14 home]# cat nas_thin_dump.xml
<superblock uuid="" time="1787" transaction="3545"
data_block_size="128" nr_data_blocks="249980672">
</superblock>

Compared with other, healthy pools, it seems that all the device nodes and
mapping info in the metadata LV have been lost.
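
For comparison, a dump from a healthy pool would normally contain <device> entries with their mappings nested inside the superblock element, something like the following (all ids and block numbers here are made-up examples):

<superblock uuid="" time="1787" transaction="3545" data_block_size="128" nr_data_blocks="249980672">
  <device dev_id="1" mapped_blocks="1024" transaction="0" creation_time="0" snap_time="0">
    <range_mapping origin_begin="0" data_begin="0" length="1000" time="0"/>
    <single_mapping origin_block="1000" data_block="2048" time="1"/>
  </device>
</superblock>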

Could this be a case of 'orphan nodes'? And could you give us your
semi-automatic repair tools so we can try to repair it?


Thank you very much!

Mars