[Linux-cluster] Fwd: CLVM exclusive mode

brem belguebli brem.belguebli at gmail.com
Fri Jul 31 19:29:49 UTC 2009


Hi,

Same behaviour as the one Rafael observed.

Everything is coherent as long as you use the exclusive flag from the rogue
node: the locking does its job. Deactivating an already open VG (with a
mounted lvol) is not possible either. How would this behave if one used raw
devices instead of a filesystem?

But as soon as you ignore the exclusive flag on the rogue node (vgchange
-a y vgXX), the locking is completely bypassed. This is definitely where the
watchdog has to sit: within the tools (lvchange, vgchange) or at the DLM level.
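
Until such a watchdog exists, one conceivable stopgap is a wrapper that never
issues a plain activation against a clustered VG. A minimal sketch (the
wrapper itself is hypothetical, not part of LVM; it assumes the 6th vg_attr
character marks a clustered VG):

  #!/bin/sh
  # safe_vgchange: hypothetical guard around VG activation.
  # It forces the exclusive flag on clustered VGs, so the DLM EX
  # lock taken by the first activator is always honoured.
  VG="$1"
  # vg_attr looks like "wz--nc"; a trailing "c" means clustered
  attr=$(vgs --noheadings -o vg_attr "$VG" | tr -d ' ')
  case "$attr" in
    ?????c*) exec vgchange -a ey "$VG" ;;  # clustered: exclusive only
    *)       exec vgchange -a y "$VG" ;;   # local VG: plain activation
  esac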

Below is the output of the test:

node1 = nodeid 1
node2 = nodeid 2

node1:

[root at node1 ~]# vgchange -a ey vg11
  1 logical volume(s) in volume group "vg11" now active

[root at node1 ~]# lvs
  LV      VG     Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  lvol1   vg11   -wi-a-  6.00G

[root at node1 ~]# ldebug

id nodeid remid pid xid exflags flags sts grmode rqmode time_ms r_nodeid r_len r_name
39a0001 0 0 434 0 1 1 2 5 -1 0 0 64 "iZ8vgn7nBm05aMSo5cfpy63rflTqL2ryr3Xrp1prEGceCkA2dhSA2ENWocunEfdf"

[root at node1 ~]# cdebug

Resource ffff81010abd6e00 Name (len=64) "iZ8vgn7nBm05aMSo5cfpy63rflTqL2ryr3Xrp1prEGceCkA2dhSA2ENWocunEfdf"
Master Copy
Granted Queue
039a0001 EX
Conversion Queue
Waiting Queue
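
(Side note: ldebug and cdebug are not standard commands; presumably they are
local aliases that dump the DLM lock and resource tables of the clvmd
lockspace via debugfs, something along these lines:)

  # assumed definitions, the poster's actual aliases are not shown;
  # they require debugfs mounted on /sys/kernel/debug
  alias ldebug='cat /sys/kernel/debug/dlm/clvmd_locks'
  alias cdebug='cat /sys/kernel/debug/dlm/clvmd'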

[root at node1 ~]# mount /dev/vg11/lvol1 /mnt

node2:

[root at node2 ~]# vgchange -a ey vg11
  Error locking on node node2: Volume is busy on another node
  0 logical volume(s) in volume group "vg11" now active

[root at node2 ~]# ldebug
(no output)
[root at node2 ~]# cdebug
(no output)


[root at node2 ~]# vgchange -a n vg11
  Error locking on node node1: LV vg11/lvol1 in use: not deactivating
  0 logical volume(s) in volume group "vg11" now active

# vg11/lvol1 is already mounted on node1!

[root at node2 ~]# vgchange -a y vg11
  1 logical volume(s) in volume group "vg11" now active

[root at node2 ~]# mount /dev/vg11/lvol1 /mnt
# success: it happens! The same lvol is now mounted on both nodes. ;-)
Brem


2009/7/31, brem belguebli <brem.belguebli at gmail.com>:
>
> Hi Rafael,
>
> Good testing, it confirms that some additional barriers are necessary to
> prevent undesired behaviours.
>
> I'll test the same procedure at VG level by tomorrow.
>
> 2009/7/30 Rafael Micó Miranda <rmicmirregs at gmail.com>
>
>> Hi Brem
>>
>> On Thu, 30-07-2009 at 09:15 +0200, brem belguebli wrote:
>> > Hi,
>> >
>> > does it look like we're hitting some "undesired feature"? ;-)
>> >
>> > Concerning the 0 nodeid, I think I read that in some Red Hat documents
>> > or a Bugzilla report; I could try to find it again.
>> >
>> > Brem
>>
>> I made some tests in my lab environment too; I attach the results in the
>> TXT file.
>>
>> My conclusions:
>>
>> 1.- logvols with the exclusive flag must be used over clustered volume
>> groups (obvious and already known)
>> 2.- logvols activated with the exclusive flag must be handled EXCLUSIVELY
>> with the exclusive flag
>>
>> ---> as part of my lvm-cluster.sh resource script, the exclusive flag is
>> part of the resource definition in cluster.conf, so this is correctly
>> handled
>>
>> 3.- you can activate an already-active exclusive logvol on any node if
>> you don't take the exclusive flag into account during the activation
>> 4.- in-use (opened) logvols are protected from deactivation from
>> secondary nodes, even from the main node
>> 5.- after a node failure (hang-up, fencing...) the logvol is not open
>> anymore, so it can be exclusively activated on a new node
>>
>> All this was tested manually, but this is the expected behaviour of the
>> lvm-cluster.sh resource script; a sketch of its start/stop logic follows
>> the link below.
>>
>> Link to lvm-cluster.sh resource script:
>>
>> https://www.redhat.com/archives/cluster-devel/2009-June/msg00020.html
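>>
>> A minimal sketch of the start/stop steps such a script presumably
>> performs (names like start_vg and OCF_RESKEY_vgname are illustrative
>> here, not copied from the real lvm-cluster.sh):
>>
>>   start_vg() {
>>       # exclusive activation: clvmd takes an EX lock in the DLM,
>>       # so only one node at a time can hold the VG active
>>       vgchange -a ey "$OCF_RESKEY_vgname" || return 1
>>   }
>>
>>   stop_vg() {
>>       # fails while a logvol is open (mounted), which is the
>>       # protection described in point 4 above
>>       vgchange -a n "$OCF_RESKEY_vgname" || return 1
>>   }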
>>
>> Cheers,
>>
>> Rafael
>>
>> --
>> Rafael Micó Miranda