From ccaulfie at redhat.com Tue Mar 1 08:38:46 2016
From: ccaulfie at redhat.com (Christine Caulfield)
Date: Tue, 1 Mar 2016 08:38:46 +0000
Subject: [Linux-cluster] make fails "config/libs/libccsconfdb/libccs.so:
undefined reference to `confdb_key_iter_typed2'"
In-Reply-To: <56D38441.5040603@richtercloud.de>
References: <56D38441.5040603@richtercloud.de>
Message-ID: <56D55516.3040304@redhat.com>
It looks like you're trying to build the cman-based cluster services against
corosync 2; that's not supported. If you want cman, you have to use
corosync 1.
However, the latest and maintained cluster code is corosync 2 + pacemaker.
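For what it's worth, a quick way to confirm which corosync you are building
against before running ./configure (a rough sketch; the dpkg query is an
assumption and package names vary by distro):

# corosync -v
# dpkg -l | grep -i corosync

The confdb API (confdb_key_iter_typed2 and friends) only exists in
corosync 1.x; it was replaced by cmap in 2.x, which is why the link step
fails with those undefined references.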
Chrissie
On 28/02/16 23:35, Karl-Philipp Richter wrote:
> `./configure && make` fails with
>
> cc -o ccs_tool ccs_tool.o editconf.o
> -L/mnt/main/sources/cluster/config/libs/libccsconfdb -lccs `xml2-config
> --libs` -L/usr/lib
> /mnt/main/sources/cluster/config/libs/libccsconfdb/libccs.so:
> undefined reference to `confdb_key_iter_typed2'
> /mnt/main/sources/cluster/config/libs/libccsconfdb/libccs.so:
> undefined reference to `confdb_key_get_typed2'
> collect2: error: ld returned 1 exit status
> Makefile:29: recipe for target 'ccs_tool' failed
> make[3]: *** [ccs_tool] Error 1
>
> See https://travis-ci.org/krichter722/cluster/builds/112480859 for
> details. Also experienced on Ubuntu 15.10.
>
> experienced with cluster-3.2.0-25-g720cbde
>
From krichter at posteo.de Tue Mar 1 15:50:31 2016
From: krichter at posteo.de (Karl-Philipp Richter)
Date: Tue, 1 Mar 2016 16:50:31 +0100
Subject: [Linux-cluster] How to undo vgchange --clustered y?
Message-ID: <56D5BA47.1090703@posteo.de>
Hi,
I invoked `vgchange --clustered y [name]` and accepted the warning that
the volume group might become inaccessible, mistaking "inaccessible" for
"unavailable to other cluster nodes". Since `clvm` doesn't work on
Ubuntu 15.10 and building it from source is
painful [http://askubuntu.com/questions/740615/how-to-get-clvmd-running-on-ubuntu-15-10],
I now seem to have no way to access the clustered volume group at all. Is
there any way to make the volume group accessible again?
-Kalle
From rpeterso at redhat.com Tue Mar 1 16:02:11 2016
From: rpeterso at redhat.com (Bob Peterson)
Date: Tue, 1 Mar 2016 11:02:11 -0500 (EST)
Subject: [Linux-cluster] How to undo vgchange --clustered y?
In-Reply-To: <56D5BA47.1090703@posteo.de>
References: <56D5BA47.1090703@posteo.de>
Message-ID: <9411495.32397096.1456848131364.JavaMail.zimbra@redhat.com>
----- Original Message -----
> Hi,
> I invoked `vgchange --clustered y [name]` and accepted the warning that
> the volume group might become inaccessible by mistaking inaccessible
> with unavailable for other cluster nodes. Since `clvm` doesn't work on
> Ubuntu 15.10 and building from source is
> painful[http://askubuntu.com/questions/740615/how-to-get-clvmd-running-on-ubuntu-15-10]
> I seem to have no chance to ever access the clustered volume group. Is
> there any solution to make the volume group accessible again?
>
> -Kalle
Hi Kalle,
I had this problem once a long time ago.
I think what I did was: I exported the LUN on the SAN to a working
cluster with clvmd, and did vgchange -cn from there. You probably don't
have that option. You could try vgchange -fff, but I haven't tried it myself
so I don't know if it will work.
Regards,
Bob Peterson
Red Hat File Systems
From bmr at redhat.com Tue Mar 1 16:33:05 2016
From: bmr at redhat.com (Bryn M. Reeves)
Date: Tue, 1 Mar 2016 16:33:05 +0000
Subject: [Linux-cluster] How to undo vgchange --clustered y?
In-Reply-To: <9411495.32397096.1456848131364.JavaMail.zimbra@redhat.com>
References: <56D5BA47.1090703@posteo.de>
<9411495.32397096.1456848131364.JavaMail.zimbra@redhat.com>
Message-ID: <20160301163304.GA29278@hex.redhat.com>
On Tue, Mar 01, 2016 at 11:02:11AM -0500, Bob Peterson wrote:
> I had this problem once a long time ago.
> I think what I did was: I exported the LUN on the SAN to a working
> cluster with clvmd, and did vgchange -cn from there. You probably don't
> have that option. You could try vgchange -fff, but I haven't tried it myself
> so I don't know if it will work.
The easiest and safest way to do it is to use lvm2's --config switch
to temporarily disable clustered locking for the vgchange command.
Of course, before doing this, you should check that the cluster really
is disabled, inquorate, or otherwise unusable (or, in the case of an
accidental "vgchange -cy", that it doesn't exist :).
The command looks like:
# vgchange -cn $vgname --config 'global {locking_type = 0}'
Where $vgname is the name of the VG to modify.
If you're on Red Hat and have a portal account there's some
additional detail in the following kbase article:
https://access.redhat.com/solutions/3618
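In case it's useful, here is a minimal before/after check (a sketch only;
$vgname as above). The sixth character of the VG attribute string shows 'c'
while the clustered flag is set, and the first vgs needs the same locking
override because LVM skips clustered VGs when clvmd isn't running:

# vgs -o vg_name,vg_attr --config 'global {locking_type = 0}' $vgname
# vgchange -cn $vgname --config 'global {locking_type = 0}'
# vgs -o vg_name,vg_attr $vgname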
Regards,
Bryn.
From teigland at redhat.com Tue Mar 1 16:35:07 2016
From: teigland at redhat.com (David Teigland)
Date: Tue, 1 Mar 2016 10:35:07 -0600
Subject: [Linux-cluster] How to undo vgchange --clustered y?
In-Reply-To: <56D5BA47.1090703@posteo.de>
References: <56D5BA47.1090703@posteo.de>
Message-ID: <20160301163507.GA7457@redhat.com>
On Tue, Mar 01, 2016 at 04:50:31PM +0100, Karl-Philipp Richter wrote:
> Hi,
> I invoked `vgchange --clustered y [name]` and accepted the warning that
> the volume group might become inaccessible by mistaking inaccessible
> with unavailable for other cluster nodes. Since `clvm` doesn't work on
> Ubuntu 15.10 and building from source is
> painful[http://askubuntu.com/questions/740615/how-to-get-clvmd-running-on-ubuntu-15-10]
> I seem to have no chance to ever access the clustered volume group. Is
> there any solution to make the volume group accessible again?
vgchange -cn --config global/locking_type=0 vgname
From krichter at posteo.de Fri Mar 4 16:02:02 2016
From: krichter at posteo.de (Karl-Philipp Richter)
Date: Fri, 4 Mar 2016 17:02:02 +0100
Subject: [Linux-cluster] How to undo vgchange --clustered y?
In-Reply-To: <20160301163304.GA29278@hex.redhat.com>
References: <56D5BA47.1090703@posteo.de>
<9411495.32397096.1456848131364.JavaMail.zimbra@redhat.com>
<20160301163304.GA29278@hex.redhat.com>
Message-ID: <56D9B17A.3080407@posteo.de>
Hi,
Am 01.03.2016 um 17:33 schrieb Bryn M. Reeves:
> The command looks like:
>
> # vgchange -cn $vgname --config 'global {locking_type = 0}'
>
> Where $vgname is the name of the VG to modify.
That worked perfectly. Thanks a lot!
-Kalle
From shreekant.jena at gmail.com Sat Mar 5 06:46:22 2016
From: shreekant.jena at gmail.com (Shreekant Jena)
Date: Sat, 5 Mar 2016 12:16:22 +0530
Subject: [Linux-cluster] CMAN Failed to start on Secondary Node
Message-ID:
Dear All,
I have a 2-node cluster, but after a reboot the secondary node is showing
offline and cman fails to start.
Please find the logs from the secondary node below:
root at EI51SPM1 cluster]# clustat
msg_open: Invalid argument
Member Status: Inquorate
Resource Group Manager not running; no service information available.
Membership information not available
[root at EI51SPM1 cluster]# tail -10 /var/log/messages
Feb 24 13:36:23 EI51SPM1 ccsd[25487]: Error while processing connect:
Connection refused
Feb 24 13:36:23 EI51SPM1 kernel: CMAN: sending membership request
Feb 24 13:36:27 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing
connection.
Feb 24 13:36:27 EI51SPM1 ccsd[25487]: Error while processing connect:
Connection refused
Feb 24 13:36:28 EI51SPM1 kernel: CMAN: sending membership request
Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing
connection.
Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Error while processing connect:
Connection refused
Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing
connection.
Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Error while processing connect:
Connection refused
Feb 24 13:36:33 EI51SPM1 kernel: CMAN: sending membership request
[root at EI51SPM1 cluster]#
[root at EI51SPM1 cluster]# cman_tool status
Protocol version: 5.0.1
Config version: 166
Cluster name: IVRS_DB
Cluster ID: 9982
Cluster Member: No
Membership state: Joining
[root at EI51SPM1 cluster]# cman_tool nodes
Node Votes Exp Sts Name
[root at EI51SPM1 cluster]#
[root at EI51SPM1 cluster]#
Thanks & regards
SHREEKANTA JENA
From lists at alteeve.ca Sat Mar 5 06:47:59 2016
From: lists at alteeve.ca (Digimer)
Date: Sat, 5 Mar 2016 01:47:59 -0500
Subject: [Linux-cluster] CMAN Failed to start on Secondary Node
In-Reply-To:
References:
Message-ID: <56DA811F.6090906@alteeve.ca>
Please share your cluster.conf (only obfuscate passwords please) and the
output of 'clustat' from each node.
digimer
On 05/03/16 01:46 AM, Shreekant Jena wrote:
> Dear All,
>
> I have a 2 node cluster but after reboot secondary node is showing
> offline . And cman failed to start .
>
> Please find below logs on secondary node:-
>
> root at EI51SPM1 cluster]# clustat
> msg_open: Invalid argument
> Member Status: Inquorate
>
> Resource Group Manager not running; no service information available.
>
> Membership information not available
> [root at EI51SPM1 cluster]# tail -10 /var/log/messages
> Feb 24 13:36:23 EI51SPM1 ccsd[25487]: Error while processing connect:
> Connection refused
> Feb 24 13:36:23 EI51SPM1 kernel: CMAN: sending membership request
> Feb 24 13:36:27 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing
> connection.
> Feb 24 13:36:27 EI51SPM1 ccsd[25487]: Error while processing connect:
> Connection refused
> Feb 24 13:36:28 EI51SPM1 kernel: CMAN: sending membership request
> Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing
> connection.
> Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Error while processing connect:
> Connection refused
> Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing
> connection.
> Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Error while processing connect:
> Connection refused
> Feb 24 13:36:33 EI51SPM1 kernel: CMAN: sending membership request
> [root at EI51SPM1 cluster]#
> [root at EI51SPM1 cluster]# cman_tool status
> Protocol version: 5.0.1
> Config version: 166
> Cluster name: IVRS_DB
> Cluster ID: 9982
> Cluster Member: No
> Membership state: Joining
> [root at EI51SPM1 cluster]# cman_tool nodes
> Node Votes Exp Sts Name
> [root at EI51SPM1 cluster]#
> [root at EI51SPM1 cluster]#
>
>
> Thanks & regards
> SHREEKANTA JENA
>
>
>
--
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?
From shreekant.jena at gmail.com Sat Mar 5 09:47:26 2016
From: shreekant.jena at gmail.com (Shreekant Jena)
Date: Sat, 5 Mar 2016 15:17:26 +0530
Subject: [Linux-cluster] CMAN Failed to start on Secondary Node
In-Reply-To: <56DA811F.6090906@alteeve.ca>
References:
<56DA811F.6090906@alteeve.ca>
Message-ID:
secondary node
--------------------------------------
[root at Node2 ~]# cat /etc/cluster/cluster.conf
[root at Node2 ~]# clustat
msg_open: Invalid argument
Member Status: Inquorate
Resource Group Manager not running; no service information available.
Membership information not available
Primary Node
-----------------------------------------
[root at Node1 ~]# clustat
Member Status: Quorate
Member Name Status
------ ---- ------
Node1 Online, Local, rgmanager
Node2 Offline
Service Name Owner (Last) State
------- ---- ----- ------ -----
Package1 Node1 started
On Sat, Mar 5, 2016 at 12:17 PM, Digimer wrote:
> Please share your cluster.conf (only obfuscate passwords please) and the
> output of 'clustat' from each node.
>
> digimer
>
> On 05/03/16 01:46 AM, Shreekant Jena wrote:
> > Dear All,
> >
> > I have a 2 node cluster but after reboot secondary node is showing
> > offline . And cman failed to start .
> >
> > Please find below logs on secondary node:-
> >
> > root at EI51SPM1 cluster]# clustat
> > msg_open: Invalid argument
> > Member Status: Inquorate
> >
> > Resource Group Manager not running; no service information available.
> >
> > Membership information not available
> > [root at EI51SPM1 cluster]# tail -10 /var/log/messages
> > Feb 24 13:36:23 EI51SPM1 ccsd[25487]: Error while processing connect:
> > Connection refused
> > Feb 24 13:36:23 EI51SPM1 kernel: CMAN: sending membership request
> > Feb 24 13:36:27 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing
> > connection.
> > Feb 24 13:36:27 EI51SPM1 ccsd[25487]: Error while processing connect:
> > Connection refused
> > Feb 24 13:36:28 EI51SPM1 kernel: CMAN: sending membership request
> > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing
> > connection.
> > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Error while processing connect:
> > Connection refused
> > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing
> > connection.
> > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Error while processing connect:
> > Connection refused
> > Feb 24 13:36:33 EI51SPM1 kernel: CMAN: sending membership request
> > [root at EI51SPM1 cluster]#
> > [root at EI51SPM1 cluster]# cman_tool status
> > Protocol version: 5.0.1
> > Config version: 166
> > Cluster name: IVRS_DB
> > Cluster ID: 9982
> > Cluster Member: No
> > Membership state: Joining
> > [root at EI51SPM1 cluster]# cman_tool nodes
> > Node Votes Exp Sts Name
> > [root at EI51SPM1 cluster]#
> > [root at EI51SPM1 cluster]#
> >
> >
> > Thanks & regards
> > SHREEKANTA JENA
> >
> >
> >
>
>
> --
> Digimer
> Papers and Projects: https://alteeve.ca/w/
> What if the cure for cancer is trapped in the mind of a person without
> access to education?
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
From emi2fast at gmail.com Sat Mar 5 17:57:01 2016
From: emi2fast at gmail.com (emmanuel segura)
Date: Sat, 5 Mar 2016 18:57:01 +0100
Subject: [Linux-cluster] CMAN Failed to start on Secondary Node
In-Reply-To:
References:
<56DA811F.6090906@alteeve.ca>
Message-ID:
You need to configure the fencing devices.
2016-03-05 10:47 GMT+01:00 Shreekant Jena :
> secondary node
>
> --------------------------------------
> [root at Node2 ~]# cat /etc/cluster/cluster.conf
>
>
> post_join_delay="3"/>
>
>
>
>
>
>
>
>
>
>
>
>
> restricted="1">
> priority="1"/>
> priority="1"/>
>
>
>
>
>
> name="PE51SPM1">
> force_fsck="1" force_unmount="1" fsid="3446" fstype="ext3"
> mountpoint="/SPIM/admin" name="admin" options="" self_fence="1"/>
> force_fsck="1" force_unmount="1" fsid="17646" fstype="ext3"
> mountpoint="/flatfile_upload" name="flatfile_upload" options=""
> self_fence="1"/>
> force_unmount="1" fsid="64480" fstype="ext3" mountpoint="/oracle"
> name="oracle" options="" self_fence="1"/>
> force_fsck="1" force_unmount="1" fsid="60560" fstype="ext3"
> mountpoint="/SPIM/datafile_01" name="datafile_01" options=""
> self_fence="1"/>
> force_fsck="1" force_unmount="1" fsid="48426" fstype="ext3"
> mountpoint="/SPIM/datafile_02" name="datafile_02" options=""
> self_fence="1"/>
> force_fsck="1" force_unmount="1" fsid="54326" fstype="ext3"
> mountpoint="/SPIM/redolog_01" name="redolog_01" options="" self_fence="1"/>
> force_fsck="1" force_unmount="1" fsid="23041" fstype="ext3"
> mountpoint="/SPIM/redolog_02" name="redolog_02" options="" self_fence="1"/>
> force_fsck="1" force_unmount="1" fsid="46362" fstype="ext3"
> mountpoint="/SPIM/redolog_03" name="redolog_03" options="" self_fence="1"/>
> force_fsck="1" force_unmount="1" fsid="58431" fstype="ext3"
> mountpoint="/SPIM/archives_01" name="archives_01" options=""
> self_fence="1"/>
>
>
>
>
>
>
>
> [root at Node2 ~]# clustat
> msg_open: Invalid argument
> Member Status: Inquorate
>
> Resource Group Manager not running; no service information available.
>
> Membership information not available
>
>
>
> Primary Node
>
> -----------------------------------------
> [root at Node1 ~]# clustat
> Member Status: Quorate
>
> Member Name Status
> ------ ---- ------
> Node1 Online, Local, rgmanager
> Node2 Offline
>
> Service Name Owner (Last) State
> ------- ---- ----- ------ -----
> Package1 Node1 started
>
>
> On Sat, Mar 5, 2016 at 12:17 PM, Digimer wrote:
>>
>> Please share your cluster.conf (only obfuscate passwords please) and the
>> output of 'clustat' from each node.
>>
>> digimer
>>
>> On 05/03/16 01:46 AM, Shreekant Jena wrote:
>> > Dear All,
>> >
>> > I have a 2 node cluster but after reboot secondary node is showing
>> > offline . And cman failed to start .
>> >
>> > Please find below logs on secondary node:-
>> >
>> > root at EI51SPM1 cluster]# clustat
>> > msg_open: Invalid argument
>> > Member Status: Inquorate
>> >
>> > Resource Group Manager not running; no service information available.
>> >
>> > Membership information not available
>> > [root at EI51SPM1 cluster]# tail -10 /var/log/messages
>> > Feb 24 13:36:23 EI51SPM1 ccsd[25487]: Error while processing connect:
>> > Connection refused
>> > Feb 24 13:36:23 EI51SPM1 kernel: CMAN: sending membership request
>> > Feb 24 13:36:27 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing
>> > connection.
>> > Feb 24 13:36:27 EI51SPM1 ccsd[25487]: Error while processing connect:
>> > Connection refused
>> > Feb 24 13:36:28 EI51SPM1 kernel: CMAN: sending membership request
>> > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing
>> > connection.
>> > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Error while processing connect:
>> > Connection refused
>> > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing
>> > connection.
>> > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Error while processing connect:
>> > Connection refused
>> > Feb 24 13:36:33 EI51SPM1 kernel: CMAN: sending membership request
>> > [root at EI51SPM1 cluster]#
>> > [root at EI51SPM1 cluster]# cman_tool status
>> > Protocol version: 5.0.1
>> > Config version: 166
>> > Cluster name: IVRS_DB
>> > Cluster ID: 9982
>> > Cluster Member: No
>> > Membership state: Joining
>> > [root at EI51SPM1 cluster]# cman_tool nodes
>> > Node Votes Exp Sts Name
>> > [root at EI51SPM1 cluster]#
>> > [root at EI51SPM1 cluster]#
>> >
>> >
>> > Thanks & regards
>> > SHREEKANTA JENA
>> >
>> >
>> >
>>
>>
>> --
>> Digimer
>> Papers and Projects: https://alteeve.ca/w/
>> What if the cure for cancer is trapped in the mind of a person without
>> access to education?
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
--
.~.
/V\
// \\
/( )\
^`~'^
From lists at alteeve.ca Sat Mar 5 18:18:50 2016
From: lists at alteeve.ca (Digimer)
Date: Sat, 5 Mar 2016 13:18:50 -0500
Subject: [Linux-cluster] CMAN Failed to start on Secondary Node
In-Reply-To:
References:
<56DA811F.6090906@alteeve.ca>
Message-ID: <56DB230A.60007@alteeve.ca>
Working fencing is required. The rgmanager component waits for a
successful fence message before beginning recovery (to prevent
split-brains).
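As a rough illustration only (the agent, address and credentials below are
made-up placeholders, not taken from your configuration): fence devices are
defined in the <fencedevices> section of cluster.conf and referenced from
each <clusternode>'s <fence> block, and once one is defined you can
sanity-check it from the shell with its fence agent, e.g. for an IPMI BMC:

# fence_ipmilan -a 192.168.1.10 -l admin -p secret -o status
# fence_node Node2

The first command only asks the BMC for its power status; the second
actually fences Node2 using whatever cluster.conf defines, so only run it
when you mean it.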
On 05/03/16 04:47 AM, Shreekant Jena wrote:
> secondary node
>
> --------------------------------------
> [root at Node2 ~]# cat /etc/cluster/cluster.conf
>
>
> post_join_delay="3"/>
>
>
>
>
>
>
>
>
>
>
>
>
> restricted="1">
> priority="1"/>
> priority="1"/>
>
>
>
>
>
> name="PE51SPM1">
> force_fsck="1" force_unmount="1" fsid="3446" fstype="ext3"
> mountpoint="/SPIM/admin" name="admin" options="" self_fence="1"/>
> force_fsck="1" force_unmount="1" fsid="17646" fstype="ext3"
> mountpoint="/flatfile_upload" name="flatfile_upload" options=""
> self_fence="1"/>
> force_fsck="1" force_unmount="1" fsid="64480" fstype="ext3"
> mountpoint="/oracle" name="oracle" options="" self_fence="1"/>
> force_fsck="1" force_unmount="1" fsid="60560" fstype="ext3"
> mountpoint="/SPIM/datafile_01" name="datafile_01" options=""
> self_fence="1"/>
> force_fsck="1" force_unmount="1" fsid="48426" fstype="ext3"
> mountpoint="/SPIM/datafile_02" name="datafile_02" options=""
> self_fence="1"/>
> force_fsck="1" force_unmount="1" fsid="54326" fstype="ext3"
> mountpoint="/SPIM/redolog_01" name="redolog_01" options="" self_fence="1"/>
> force_fsck="1" force_unmount="1" fsid="23041" fstype="ext3"
> mountpoint="/SPIM/redolog_02" name="redolog_02" options="" self_fence="1"/>
> force_fsck="1" force_unmount="1" fsid="46362" fstype="ext3"
> mountpoint="/SPIM/redolog_03" name="redolog_03" options="" self_fence="1"/>
> force_fsck="1" force_unmount="1" fsid="58431" fstype="ext3"
> mountpoint="/SPIM/archives_01" name="archives_01" options=""
> self_fence="1"/>
>
>
>
>
>
>
>
> [root at Node2 ~]# clustat
> msg_open: Invalid argument
> Member Status: Inquorate
>
> Resource Group Manager not running; no service information available.
>
> Membership information not available
>
>
>
> Primary Node
>
> -----------------------------------------
> [root at Node1 ~]# clustat
> Member Status: Quorate
>
> Member Name Status
> ------ ---- ------
> Node1 Online, Local, rgmanager
> Node2 Offline
>
> Service Name Owner (Last) State
> ------- ---- ----- ------ -----
> Package1 Node1 started
>
>
> On Sat, Mar 5, 2016 at 12:17 PM, Digimer > wrote:
>
> Please share your cluster.conf (only obfuscate passwords please) and the
> output of 'clustat' from each node.
>
> digimer
>
> On 05/03/16 01:46 AM, Shreekant Jena wrote:
> > Dear All,
> >
> > I have a 2 node cluster but after reboot secondary node is showing
> > offline . And cman failed to start .
> >
> > Please find below logs on secondary node:-
> >
> > root at EI51SPM1 cluster]# clustat
> > msg_open: Invalid argument
> > Member Status: Inquorate
> >
> > Resource Group Manager not running; no service information available.
> >
> > Membership information not available
> > [root at EI51SPM1 cluster]# tail -10 /var/log/messages
> > Feb 24 13:36:23 EI51SPM1 ccsd[25487]: Error while processing connect:
> > Connection refused
> > Feb 24 13:36:23 EI51SPM1 kernel: CMAN: sending membership request
> > Feb 24 13:36:27 EI51SPM1 ccsd[25487]: Cluster is not quorate.
> Refusing
> > connection.
> > Feb 24 13:36:27 EI51SPM1 ccsd[25487]: Error while processing connect:
> > Connection refused
> > Feb 24 13:36:28 EI51SPM1 kernel: CMAN: sending membership request
> > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Cluster is not quorate.
> Refusing
> > connection.
> > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Error while processing connect:
> > Connection refused
> > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Cluster is not quorate.
> Refusing
> > connection.
> > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Error while processing connect:
> > Connection refused
> > Feb 24 13:36:33 EI51SPM1 kernel: CMAN: sending membership request
> > [root at EI51SPM1 cluster]#
> > [root at EI51SPM1 cluster]# cman_tool status
> > Protocol version: 5.0.1
> > Config version: 166
> > Cluster name: IVRS_DB
> > Cluster ID: 9982
> > Cluster Member: No
> > Membership state: Joining
> > [root at EI51SPM1 cluster]# cman_tool nodes
> > Node Votes Exp Sts Name
> > [root at EI51SPM1 cluster]#
> > [root at EI51SPM1 cluster]#
> >
> >
> > Thanks & regards
> > SHREEKANTA JENA
> >
> >
> >
>
>
> --
> Digimer
> Papers and Projects: https://alteeve.ca/w/
> What if the cure for cancer is trapped in the mind of a person without
> access to education?
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
>
>
--
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?
From shreekant.jena at gmail.com Mon Mar 7 07:00:04 2016
From: shreekant.jena at gmail.com (Shreekant Jena)
Date: Mon, 7 Mar 2016 12:30:04 +0530
Subject: [Linux-cluster] CMAN Failed to start on Secondary Node
In-Reply-To: <56DB230A.60007@alteeve.ca>
References:
<56DA811F.6090906@alteeve.ca>
<56DB230A.60007@alteeve.ca>
Message-ID:
Thank you for the reply.
I am new to cluster configuration, but both nodes were running fine
before the reboot.
Could you guide me on how to configure a fence device on this server? It
would be highly appreciated.
Thanks,
Shreekanta Jena
On Sat, Mar 5, 2016 at 11:48 PM, Digimer wrote:
> Working fencing is required. The rgmanager component waits for a
> successful fence message before beginning recovery (to prevent
> split-brains).
>
> On 05/03/16 04:47 AM, Shreekant Jena wrote:
> > secondary node
> >
> > --------------------------------------
> > [root at Node2 ~]# cat /etc/cluster/cluster.conf
> >
> >
> > > post_join_delay="3"/>
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > > restricted="1">
> > > priority="1"/>
> > > priority="1"/>
> >
> >
> >
> >
> >
> > > name="PE51SPM1">
> > > force_fsck="1" force_unmount="1" fsid="3446" fstype="ext3"
> > mountpoint="/SPIM/admin" name="admin" options="" self_fence="1"/>
> > > force_fsck="1" force_unmount="1" fsid="17646" fstype="ext3"
> > mountpoint="/flatfile_upload" name="flatfile_upload" options=""
> > self_fence="1"/>
> > > force_fsck="1" force_unmount="1" fsid="64480" fstype="ext3"
> > mountpoint="/oracle" name="oracle" options="" self_fence="1"/>
> > > force_fsck="1" force_unmount="1" fsid="60560" fstype="ext3"
> > mountpoint="/SPIM/datafile_01" name="datafile_01" options=""
> > self_fence="1"/>
> > > force_fsck="1" force_unmount="1" fsid="48426" fstype="ext3"
> > mountpoint="/SPIM/datafile_02" name="datafile_02" options=""
> > self_fence="1"/>
> > > force_fsck="1" force_unmount="1" fsid="54326" fstype="ext3"
> > mountpoint="/SPIM/redolog_01" name="redolog_01" options=""
> self_fence="1"/>
> > > force_fsck="1" force_unmount="1" fsid="23041" fstype="ext3"
> > mountpoint="/SPIM/redolog_02" name="redolog_02" options=""
> self_fence="1"/>
> > > force_fsck="1" force_unmount="1" fsid="46362" fstype="ext3"
> > mountpoint="/SPIM/redolog_03" name="redolog_03" options=""
> self_fence="1"/>
> > > force_fsck="1" force_unmount="1" fsid="58431" fstype="ext3"
> > mountpoint="/SPIM/archives_01" name="archives_01" options=""
> > self_fence="1"/>
> >