From vm at sykora.cz  Mon Nov  2 08:34:25 2015
From: vm at sykora.cz (Vladimir Martinek)
Date: Mon, 2 Nov 2015 09:34:25 +0100
Subject: [Linux-cluster] Two cluster nodes hold exclusive POSIX lock on the same file
In-Reply-To: <20151030201222.GD14890@redhat.com>
References: <563378F2.6090801@sykora.cz> <20151030201222.GD14890@redhat.com>
Message-ID: <56372011.2080507@sykora.cz>

Thank you, understood it is working as expected.

But how is it then ensured that the two nodes holding the lock can't
write to the same file?

Thank you

Vladimir Martinek

On 10/30/2015 09:12 PM, David Teigland wrote:
> On Fri, Oct 30, 2015 at 03:04:34PM +0100, Vladimir Martinek wrote:
>> Hello,
>>
>> I have a 3-node cluster and a fencing agent that takes about 30
>> seconds to complete the fencing. In those 30 seconds it is possible
>> for two nodes of the cluster to get an exclusive POSIX lock on the
>> same file.
>>
>> Did I miss something here, or is this correct behaviour?
>>
>> Also, when trying with BSD flock, it works as I would expect - the
>> locks are only released after the fencing completes and node 1 is
>> confirmed to be fenced.
>>
>> Following is the output of the dlm_tool dump command. Watch for the
>> line "gfs2fs purged 1 plocks for 1" - the locks of the failed node 1
>> are purged long before the fencing is completed.
>>
>> Thank you for any advice.
> It works as expected; recovery of posix locks does not need to wait for
> fencing to complete.
> Dave

--
Ing. Vladimír Martinek
Programmer

T: +420 723 908 968
@: vm at sykora.cz

Sykora Data Center s.r.o.
28. října 1512/123, 702 00 Ostrava
www.sykora.cz

From jayesh.shinde at netcore.co.in  Mon Nov  2 14:39:50 2015
From: jayesh.shinde at netcore.co.in (Jayesh Shinde)
Date: Mon, 2 Nov 2015 20:09:50 +0530
Subject: [Linux-cluster] fence_cisco_ucs issue within cluster.conf
Message-ID: <563775B6.1060503@netcore.co.in>

Hi,

I am trying to configure a 2-node cluster with fence_cisco_ucs. Fence
testing works properly via the command line, but it is not working
within cluster.conf.

Problem / scenario:
-------------------
When I manually shut down the Ethernet card of the mailbox1 server, the
mailbox2 server detects the network failure and tries to fence mailbox1,
but the fencing fails with a "plug"-related error (see the log below), i.e.:

    Failed: Unable to obtain correct plug status or plug is not available

I have referred to the Red Hat KB, Google, and older mail threads. As
suggested there, I upgraded from "fence-agents-3.1.5-35.el6" to
"fence-agents-4.0.15-8.el6.x86_64". I also tried a few other changes in
cluster.conf, but none of them worked.

Kindly guide me on where I am going wrong with the "plug".

I am using RHEL 6.5 with:
-------------------------
cman-3.0.12.1-59.el6.x86_64
rgmanager-3.0.12.1-19.el6.x86_64
fence-virt-0.2.3-15.el6.x86_64
fence-agents-4.0.15-8.el6.x86_64

Command-line fencing:
---------------------
[root at mailbox2 ~]# /usr/sbin/fence_cisco_ucs -a 172.17.1.30 -l KVM -p 'myPassword' -o status -v -z --plug=mailbox1 --ipport=443 suborg="/org-root/ls-mailbox" ; echo $?
Status: ON
0

/etc/hosts on both mailbox1 & mailbox2 servers:

127.0.0.1       localhost localhost.localdomain
192.168.51.91   mailbox1.mydomain.com
192.168.51.92   mailbox2.mydomain.com

/etc/cluster/cluster.conf:
--------------------------
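For reference, a generic two-node cluster.conf using fence_cisco_ucs
typically looks like the sketch below. This is an illustration only, not
the poster's actual file: every name and credential is hypothetical,
reusing the values from the command-line test above. With this agent,
the "port" attribute of the device element is what gets passed as --plug,
and it must match the UCS service-profile name under the given suborg.

    <?xml version="1.0"?>
    <cluster config_version="2" name="mailcluster">
      <!-- two_node mode keeps the cluster quorate with one member -->
      <cman expected_votes="1" two_node="1"/>
      <clusternodes>
        <clusternode name="mailbox1.mydomain.com" nodeid="1">
          <fence>
            <method name="1">
              <!-- "port" is passed to the agent as --plug -->
              <device name="ucs" port="mailbox1"/>
            </method>
          </fence>
        </clusternode>
        <clusternode name="mailbox2.mydomain.com" nodeid="2">
          <fence>
            <method name="1">
              <device name="ucs" port="mailbox2"/>
            </method>
          </fence>
        </clusternode>
      </clusternodes>
      <fencedevices>
        <!-- ssl="on" corresponds to -z on the command line -->
        <fencedevice agent="fence_cisco_ucs" name="ucs"
                     ipaddr="172.17.1.30" ipport="443" ssl="on"
                     login="KVM" passwd="myPassword"
                     suborg="/org-root/ls-mailbox"/>
      </fencedevices>
      <rm/>
    </cluster>

The usual cause of "Unable to obtain correct plug status or plug is not
available" is a port/suborg combination that does not resolve to an
existing service profile, so comparing the working command-line options
against the device attributes is a reasonable first check.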
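On the POSIX-lock question in the first message above, a minimal sketch
of the kind of test involved is below, assuming a hypothetical
GFS2-backed path /mnt/gfs2/testfile. Run on two nodes, the second
instance blocks until the first node's locks go away: after DLM plock
recovery for the fcntl() lock, but only after fencing completes for the
flock() lock, which is the difference the thread describes.

    /* lock_test.c - sketch of taking an exclusive POSIX (fcntl) lock and
     * an exclusive BSD (flock) lock on a shared file, then holding both.
     * The path below is hypothetical. Build with: cc -o lock_test lock_test.c
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/file.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/mnt/gfs2/testfile", O_RDWR | O_CREAT, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Exclusive POSIX lock over the whole file; per the thread,
         * these are purged for a failed node before fencing finishes. */
        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
                            .l_start = 0, .l_len = 0 };
        if (fcntl(fd, F_SETLKW, &fl) < 0)
            perror("fcntl(F_SETLKW)");
        else
            printf("holding exclusive POSIX lock\n");

        /* Exclusive BSD lock on the same file; per the thread, this is
         * only released after the failed node's fencing completes. */
        if (flock(fd, LOCK_EX) < 0)
            perror("flock(LOCK_EX)");
        else
            printf("holding exclusive flock\n");

        pause();  /* keep holding both locks until the process/node dies */
        return 0;
    }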