[linux-lvm] LVM snapshot with Clustered VG

Vladislav Bogdanov bubble at hoster-ok.com
Wed Mar 6 09:35:23 UTC 2013


06.03.2013 12:15, Andreas Pflug wrote:
> On 06.03.13 08:58, Vladislav Bogdanov wrote:
>> 06.03.2013 10:40, Andreas Pflug wrote:
>>> On 01.03.13 16:41, Vladislav Bogdanov wrote:
>>>
>>>> Hi Andreas,
>>>> Lock conversion is only enabled if you pass the --force flag.
>>>> Also, to upgrade a local lock to an exclusive one, you need to ensure,
>>>> IIRC, that no other node still holds a local lock.
>>> Hm, tried that as well:
>>>
>>> tools/lvm lvchange --force -aey  -vvvv vg/locktest
>>>
>>> --force changes the error from "resource busy" to "invalid argument":
>> Is the volume active on any other node at that time?
> 
> I made sure it's not active on other nodes: lvchange -an vg/locktest ;
> lvchange -aly vg/locktest
>> And do you run clvmd from that build tree as well?
>>
>> Also, can you please try the attached patch (on top of the one you
>> already have)? I polished the conversion a bit more, denying -an if the
>> volume is exclusively locked somewhere, plus other fixes to the logic.
> I tried that additional patch. I'm running these test versions (including
> clvmd) on my test node only; the other nodes are still running clvmd
> 2.02.95 (I guess this shouldn't matter since the volume is inactive on
> all of them). Same result:

I believe this matters, because the error you see is received from a
remote node. Is the node with ID 7400a8c0 the local one?
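
A quick way to check, assuming the default corosync nodeid scheme (the
nodeid is derived from the ring0 IPv4 address, and clvmd prints it
byte-swapped in hex) - just a rough sketch:

# map the hex node ID from the log back to an IPv4 address
NODEID_HEX=7400a8c0
printf '%d.%d.%d.%d\n' \
    0x${NODEID_HEX:6:2} 0x${NODEID_HEX:4:2} \
    0x${NODEID_HEX:2:2} 0x${NODEID_HEX:0:2}
# prints 192.168.0.116 for this example ID

# compare with the local node ID reported by corosync (in decimal)
corosync-cfgtool -s | grep -i 'node id'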

> 
> #lvchange.c:258     Activating logical volume "locktest" exclusively (forced)
> #activate/dev_manager.c:284         Getting device info for vg-locktest [LVM-oW2W7O2cgWRLUhoVR8qqqQY7wlcYexmWU8y83bGQz9IcnXh3GfXslBN6ziZrC3BN]
> #ioctl/libdm-iface.c:1724         dm info  LVM-oW2W7O2cgWRLUhoVR8qqqQY7wlcYexmWU8y83bGQz9IcnXh3GfXslBN6ziZrC3BN NF   [16384] (*1)
> #activate/activate.c:1067       vg/locktest is active
> #activate/dev_manager.c:284         Getting device info for vg-locktest [LVM-oW2W7O2cgWRLUhoVR8qqqQY7wlcYexmWU8y83bGQz9IcnXh3GfXslBN6ziZrC3BN]
> #ioctl/libdm-iface.c:1724         dm info  LVM-oW2W7O2cgWRLUhoVR8qqqQY7wlcYexmWU8y83bGQz9IcnXh3GfXslBN6ziZrC3BN NF   [16384] (*1)
> #activate/activate.c:1067       vg/locktest is active
> #activate/dev_manager.c:284         Getting device info for vg-locktest [LVM-oW2W7O2cgWRLUhoVR8qqqQY7wlcYexmWU8y83bGQz9IcnXh3GfXslBN6ziZrC3BN]
> #ioctl/libdm-iface.c:1724         dm info  LVM-oW2W7O2cgWRLUhoVR8qqqQY7wlcYexmWU8y83bGQz9IcnXh3GfXslBN6ziZrC3BN NF   [16384] (*1)
> #activate/activate.c:1067       vg/locktest is active
> #locking/cluster_locking.c:513       Locking LV oW2W7O2cgWRLUhoVR8qqqQY7wlcYexmWU8y83bGQz9IcnXh3GfXslBN6ziZrC3BN EX (LV|NONBLOCK|CLUSTER|LOCAL|CONVERT) (0x40dd)
> #locking/cluster_locking.c:400   Error locking on node 7400a8c0: Invalid argument
> 
> 
>> This patch also allows locking (activation) to be performed on remote
>> nodes. I only tested this with corosync 2 (set up the way the latest
>> pacemaker - post-1.1.8 git master - needs it: nodes have an additional
>> 'name' value in the nodelist; please see
>> http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-node-name.html).
>>
> 
> I'm running corosync 1.4.2 (Debian wheezy).

Which cluster manager interface does clvmd detect, corosync or openais?
You should use the former; the openais one is (was) using the LCK
service, which is very unstable.
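
A quick way to check which interface clvmd picked - a rough sketch, and
the exact options depend on how your clvmd binary was built:

clvmd -h            # the usage output lists the cluster interfaces compiled in
clvmd -I corosync   # select the corosync interface explicitly instead of relying on auto-detection

If the openais interface is being auto-selected, forcing -I corosync (or
rebuilding without the openais interface) should keep the LCK service out
of the picture.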

> 
> Regards,
> Andreas



