[linux-lvm] Thin pool vg1-thinpool1-tpool (253:3) transaction_id is 549, while expected 505.

Łukasz Czerpak lukasz.czerpak at gmail.com
Mon Dec 9 14:18:36 UTC 2019


hi,

Sure, I will update the kernel as per your recommendation. Thank you for your help and the prompt replies!
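
In case it helps anyone else on Ubuntu 18.04, I plan to get the newer kernel through the HWE stack. A rough sketch, assuming the standard Ubuntu 18.04 HWE packages (which should bring a 5.x kernel):

$ sudo apt update
$ sudo apt install --install-recommends linux-generic-hwe-18.04
$ sudo reboot
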
Regarding “sharing the thin-pool”: there are no VMs, only LXD is using the VG and thin-pool. After digging further I found a relevant article:

https://discuss.linuxcontainers.org/t/is-it-safe-to-create-an-lvm-backed-storage-pool-that-can-be-shared-with-other-logical-volumes/5658/5

This might be the reason. I will investigate it further and share the results here.
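
For reference, this is roughly how I intend to check whether the host and LXD are both touching the pool. Just a sketch; “default” is only a placeholder for my LXD storage pool name:

# list LXD storage pools and inspect the LVM-backed one
$ lxc storage list
$ lxc storage show default
# list all LVs in the VG, including hidden ones, and the pool they belong to
$ sudo lvs -a vg1 -o lv_name,lv_attr,pool_lv
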

--
best regards,
Łukasz Czerpak




> On 9 Dec 2019, at 14:59, Zdenek Kabelac <zkabelac at redhat.com> wrote:
> 
> On 09. 12. 19 at 14:50, Łukasz Czerpak wrote:
>> hi,
>> It’s Ubuntu 18.04.3:
>> $ lvm version
>>   LVM version:     2.02.176(2) (2017-11-03)
>>   Library version: 1.02.145 (2017-11-03)
>>   Driver version:  4.37.0
>> $ uname -a
>> Linux gandalf 4.15.0-72-generic #81-Ubuntu SMP Tue Nov 26 12:20:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
>> It’s weird, as the same error occurred a few minutes ago. I wanted to take a snapshot of a thin volume, and it first returned the following error:
>> $ lvcreate -s --name vmail-data-snapshot vg1/vmail-data
>> Using default stripesize 64.00 KiB.
>> Can't create snapshot vmail-data-snapshot as origin vmail-data is not suspended.
>> Failed to suspend thin snapshot origin vg1/vmail-data.
>> Then I tried with a different volume:
>> $ lvcreate -s --name owncloud-data-snapshot vg1/owncloud-data
>> Using default stripesize 64.00 KiB.
>> Thin pool vg1-thinpool1-tpool (253:2) transaction_id is 574, while expected 572.
>> Failed to suspend vg1/thinpool1 with queued messages.
>> The same error occurred when I then tried to export an LXD container:
> 
> Hi
> 
> While I'd highly recommend moving to kernel 4.20 (at least), judging by the names of your volumes it does look like you are using thinp in some 'cloud' environment.
> 
> For a thin-pool it is critically important that it is active on only a single machine at any time. You must never have a thin-pool activated on multiple machines (even if one machine is not using it but merely has it active).
> 
> We have already seen many cases where users activated a thin-pool on their host machine, then passed the devices into virtual machines and used the same thin-pool there (so the thin-pool was active multiple times simultaneously).
> 
> So please carefully check whether this is your case, as it would nicely explain why your 'transaction_id' diverged so much.
> 
> Regards
> 
> Zdenek
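
PS: For the record, this is how I am comparing the transaction_id the kernel reports with the one LVM metadata expects. A sketch; the names match the error output above:

# transaction_id according to LVM metadata
$ sudo lvs -o lv_name,transaction_id vg1/thinpool1
# transaction_id according to the kernel (first field after 'thin-pool')
$ sudo dmsetup status vg1-thinpool1-tpool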
