[Linux-cluster] CMAN Failed to start on Secondary Node
Shreekant Jena
shreekant.jena at gmail.com
Mon Mar 7 09:19:20 UTC 2016
Thanks for your support.
I have done it; it's working fine now.
Thank you so much.
Thanks & Regards,
Shreekanta Jena
On Mon, Mar 7, 2016 at 12:30 PM, Shreekant Jena <shreekant.jena at gmail.com>
wrote:
> Thank you for the reply. I am new to cluster configuration, but both
> nodes were running fine before the reboot.
>
> Could you guide me on how to configure a fence device on this server? It
> would be highly appreciated.
>
> Thanks,
> Shreekanta Jena
>
> On Sat, Mar 5, 2016 at 11:48 PM, Digimer <lists at alteeve.ca> wrote:
>
>> Working fencing is required. The rgmanager component waits for a
>> successful fence message before beginning recovery (to prevent
>> split-brains).
>>
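>> If it helps, fencing is defined in cluster.conf itself: each
>> <clusternode> gets a <fence> block that references a device declared
>> under <fencedevices>. A minimal sketch, assuming IPMI-style management
>> interfaces (the agent choice, address, and credentials below are
>> placeholders, not values from your cluster):
>>
>>     <clusternode name="Node1" nodeid="1" votes="1">
>>         <fence>
>>             <method name="1">
>>                 <!-- placeholder name; must match a fencedevice below -->
>>                 <device name="ipmi_node1"/>
>>             </method>
>>         </fence>
>>     </clusternode>
>>     <!-- repeat for Node2, then declare the devices: -->
>>     <fencedevices>
>>         <fencedevice agent="fence_ipmilan" name="ipmi_node1"
>>                      ipaddr="192.168.0.1" login="admin" passwd="secret"/>
>>     </fencedevices>
>>
>> Remember to bump config_version, copy the updated file to both nodes,
>> and restart the cluster daemons so fenced can actually use the device.
>>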
>> On 05/03/16 04:47 AM, Shreekant Jena wrote:
>> > secondary node
>> >
>> > --------------------------------------
>> > [root at Node2 ~]# cat /etc/cluster/cluster.conf
>> > <?xml version="1.0"?>
>> > <cluster alias="IVRS_DB" config_version="166" name="IVRS_DB">
>> >   <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
>> >   <clusternodes>
>> >     <clusternode name="Node1" nodeid="1" votes="1">
>> >       <fence/>
>> >     </clusternode>
>> >     <clusternode name="Node2" nodeid="2" votes="1">
>> >       <fence/>
>> >     </clusternode>
>> >   </clusternodes>
>> >   <cman expected_votes="1" two_node="1"/>
>> >   <fencedevices/>
>> >   <rm>
>> >     <failoverdomains>
>> >       <failoverdomain name="Package1" ordered="1" restricted="1">
>> >         <failoverdomainnode name="Node1" priority="1"/>
>> >         <failoverdomainnode name="Node2" priority="1"/>
>> >       </failoverdomain>
>> >     </failoverdomains>
>> >     <resources>
>> >       <ip address="10.199.214.64" monitor_link="1"/>
>> >     </resources>
>> >     <service autostart="1" domain="PE51SPM1" exclusive="1" name="PE51SPM1">
>> >       <fs device="/dev/EI51SPM_DATA/SPIM_admin" force_fsck="1" force_unmount="1" fsid="3446" fstype="ext3" mountpoint="/SPIM/admin" name="admin" options="" self_fence="1"/>
>> >       <fs device="/dev/EI51SPM_DATA/flatfile_upload" force_fsck="1" force_unmount="1" fsid="17646" fstype="ext3" mountpoint="/flatfile_upload" name="flatfile_upload" options="" self_fence="1"/>
>> >       <fs device="/dev/EI51SPM_DATA/oracle" force_fsck="1" force_unmount="1" fsid="64480" fstype="ext3" mountpoint="/oracle" name="oracle" options="" self_fence="1"/>
>> >       <fs device="/dev/EI51SPM_DATA/SPIM_datafile_01" force_fsck="1" force_unmount="1" fsid="60560" fstype="ext3" mountpoint="/SPIM/datafile_01" name="datafile_01" options="" self_fence="1"/>
>> >       <fs device="/dev/EI51SPM_DATA/SPIM_datafile_02" force_fsck="1" force_unmount="1" fsid="48426" fstype="ext3" mountpoint="/SPIM/datafile_02" name="datafile_02" options="" self_fence="1"/>
>> >       <fs device="/dev/EI51SPM_DATA/SPIM_redolog_01" force_fsck="1" force_unmount="1" fsid="54326" fstype="ext3" mountpoint="/SPIM/redolog_01" name="redolog_01" options="" self_fence="1"/>
>> >       <fs device="/dev/EI51SPM_DATA/SPIM_redolog_02" force_fsck="1" force_unmount="1" fsid="23041" fstype="ext3" mountpoint="/SPIM/redolog_02" name="redolog_02" options="" self_fence="1"/>
>> >       <fs device="/dev/EI51SPM_DATA/SPIM_redolog_03" force_fsck="1" force_unmount="1" fsid="46362" fstype="ext3" mountpoint="/SPIM/redolog_03" name="redolog_03" options="" self_fence="1"/>
>> >       <fs device="/dev/EI51SPM_DATA/SPIM_archives_01" force_fsck="1" force_unmount="1" fsid="58431" fstype="ext3" mountpoint="/SPIM/archives_01" name="archives_01" options="" self_fence="1"/>
>> >       <script file="/etc/cluster/dbstart" name="dbstart"/>
>> >       <ip ref="10.199.214.64"/>
>> >     </service>
>> >   </rm>
>> > </cluster>
>> >
>> >
>> > [root at Node2 ~]# clustat
>> > msg_open: Invalid argument
>> > Member Status: Inquorate
>> >
>> > Resource Group Manager not running; no service information available.
>> >
>> > Membership information not available
>> >
>> >
>> >
>> > Primary Node
>> >
>> > -----------------------------------------
>> > [root at Node1 ~]# clustat
>> > Member Status: Quorate
>> >
>> > Member Name                  Status
>> > ------ ----                  ------
>> > Node1                        Online, Local, rgmanager
>> > Node2                        Offline
>> >
>> > Service Name                 Owner (Last)                 State
>> > ------- ----                 ----- ------                 -----
>> > Package1                     Node1                        started
>> >
>> >
>> > On Sat, Mar 5, 2016 at 12:17 PM, Digimer <lists at alteeve.ca> wrote:
>> >
>> > Please share your cluster.conf (only obfuscate passwords please) and
>> > the output of 'clustat' from each node.
>> >
>> > digimer
>> >
>> > On 05/03/16 01:46 AM, Shreekant Jena wrote:
>> > > Dear All,
>> > >
>> > > I have a 2-node cluster, but after a reboot the secondary node is
>> > > showing offline and CMAN fails to start.
>> > >
>> > > Please find the logs from the secondary node below:
>> > >
>> > > [root at EI51SPM1 cluster]# clustat
>> > > msg_open: Invalid argument
>> > > Member Status: Inquorate
>> > >
>> > > Resource Group Manager not running; no service information available.
>> > >
>> > > Membership information not available
>> > > [root at EI51SPM1 cluster]# tail -10 /var/log/messages
>> > > Feb 24 13:36:23 EI51SPM1 ccsd[25487]: Error while processing connect: Connection refused
>> > > Feb 24 13:36:23 EI51SPM1 kernel: CMAN: sending membership request
>> > > Feb 24 13:36:27 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing connection.
>> > > Feb 24 13:36:27 EI51SPM1 ccsd[25487]: Error while processing connect: Connection refused
>> > > Feb 24 13:36:28 EI51SPM1 kernel: CMAN: sending membership request
>> > > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing connection.
>> > > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Error while processing connect: Connection refused
>> > > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing connection.
>> > > Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Error while processing connect: Connection refused
>> > > Feb 24 13:36:33 EI51SPM1 kernel: CMAN: sending membership request
>> > > [root at EI51SPM1 cluster]#
>> > > [root at EI51SPM1 cluster]# cman_tool status
>> > > Protocol version: 5.0.1
>> > > Config version: 166
>> > > Cluster name: IVRS_DB
>> > > Cluster ID: 9982
>> > > Cluster Member: No
>> > > Membership state: Joining
>> > > [root at EI51SPM1 cluster]# cman_tool nodes
>> > > Node Votes Exp Sts Name
>> > > [root at EI51SPM1 cluster]#
>> > > [root at EI51SPM1 cluster]#
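>> > >
>> > > "Membership state: Joining" with an empty cman_tool nodes list means
>> > > the node keeps sending membership requests but never becomes a
>> > > member. Once a fence device is configured, a recovery sketch for the
>> > > rebooted node might look like this (assuming RHEL 4/5-era init
>> > > scripts; service names can differ by release):
>> > >
>> > > service ccsd start       # configuration daemon first
>> > > service cman start       # join cluster membership
>> > > service fenced start     # fence daemon needs a working fence device
>> > > service rgmanager start  # resource/service manager last
>> > > clustat                  # both nodes should now report Online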
>> > >
>> > >
>> > > Thanks & regards
>> > > SHREEKANTA JENA
>> > >
>> > >
>> > >
>>
>>
>> --
>> Digimer
>> Papers and Projects: https://alteeve.ca/w/
>> What if the cure for cancer is trapped in the mind of a person without
>> access to education?
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>
>
>