[linux-lvm] probable lvm thin_pool exhaustion

Ming-Hung Tsai mingnus at gmail.com
Thu Mar 12 18:11:41 UTC 2020


According to step 3, it sounds like the mapping tree is healthy, so the
metadata could simply be repaired by lvconvert/thin_repair. The error
message might be caused by one of the following:
1. There are too many snapshots, which exhausted the capacity of the
metadata spare. Expanding the metadata spare might work (see the
commands after this list for a quick check of its current size).
2. Bugs in thin_repair. Which version of thin-provisioning-tools are
you using?
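
For reference, a minimal way to check both points (assuming the VG name
qubes_dom0 from your output; the spare LV is normally the hidden
[lvol0_pmspare], so it only shows up with "lvs -a"):
$ lvs -a -o lv_name,lv_size,metadata_percent qubes_dom0
$ thin_check -V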

Also, before running lvconvert, I suggest running thin_check first, to
check whether the metadata is suitable for automatic repair or not.

$ lvchange -ay qubes_dom0/pool00_tmeta
$ thin_check /dev/mapper/qubes_dom0-pool00_tmeta
$ lvchange -an qubes_dom0/pool00_tmeta

(Maybe "lvconvert --repair" could provide options for setting repair
levels, to prevent novice users from discarding missing mappings.)
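
If the automatic path keeps failing, the manual route is roughly the
following. This is only a sketch based on lvmthin(7), not something I
have run against your metadata; "repair_meta" is a hypothetical LV name,
and it assumes the VG has a couple of GiB free for it:
$ lvcreate -L2G -n repair_meta qubes_dom0
$ lvchange -ay qubes_dom0/pool00_tmeta
$ thin_repair -i /dev/mapper/qubes_dom0-pool00_tmeta -o /dev/mapper/qubes_dom0-repair_meta
$ lvchange -an qubes_dom0/pool00_tmeta
$ lvconvert --thinpool qubes_dom0/pool00 --poolmetadata qubes_dom0/repair_meta
The last command swaps the repaired metadata into the pool; the old,
broken metadata ends up in repair_meta, so keep that LV around until the
pool is confirmed working.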

If you're not sure about the detailed steps, you can upload the
compressed metadata for further analysis:
$ lvchange -ay qubes_dom0/pool00_tmeta
$ dd if=/dev/mapper/qubes_dom0-pool00_tmeta of=tmeta.bin
$ tar -czf tmeta.tar.gz tmeta.bin
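
(Alternatively, a thin_dump of the metadata might be enough for
analysis and is usually smaller once compressed; this assumes thin_dump
from the same thin-provisioning-tools package:
$ thin_dump -r /dev/mapper/qubes_dom0-pool00_tmeta -o tmeta.xml
The -r option tells thin_dump to dump whatever it can recover.)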

Finally, the options in step 2 are for dmeventd to expand online
thin-pools. They do not help with expanding an offline, broken
thin-pool, even though the VG is not full.
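
For reference, these are the autoextend settings I assume you changed;
they live in the activation section of lvm.conf and only take effect
while the pool is active and monitored by dmeventd:
$ lvmconfig activation/thin_pool_autoextend_threshold
$ lvmconfig activation/thin_pool_autoextend_percent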

On Thu, Mar 12, 2020 at 4:14 PM <maiski at maiski.net> wrote:
>
> step 1:
> lvm vgscan vgchange -ay
> lvm lvconvert --repair qubes_dom0/pool00
> Result:
> using default stripesize 64.00 KiB.
> Terminate called after throwing an instance of 'std::runtime_error'
> what(): transaction_manager::new_block() couldn't allocate new block
> Child 7212 exited abnormally
> Repair of thin metadata volume of thin pool qubes_dom0/pool00 failed (status:1). Manual repair required!
>
> step 2:
> since i suspect that my lvm is full (though it does mark 15 g as free)
> i tried the following changes in the /etc/lvm/lvm.conf
...
> and tried step 1 again, did not work, got the same result as above with qubes_swap as active only
>
> step 3 tried
> lvextend -L+1G qubes_dom0/pool00_tmeta
> Result:
> metadata reference count differ for block xxxxxx, expected 0, but got 1 ...
> Check for pool qubes-dom/pool00 failed (status:1). Manual repair required!




