[Linux-cluster] two node cluster not coming up

Robert Peterson rpeterso at redhat.com
Thu Jun 29 14:36:28 UTC 2006


RR wrote:
> Hi Bob,
>
> Attached is the cluster.conf file, and below is the status I get from the
> command "service cman status" on the working node:
>
> Svr00# service cman status
> Protocol version: 5.0.1
> Config version: 4
> Cluster name: testcluster
> Cluster ID: 27453
> Cluster Member: Yes
> Membership state: Cluster-Member
> Nodes: 1
> Expected_votes: 1
> Total_votes: 1
> Quorum: 1
> Active subsystems: 0
> Node name: svr00
> Node addresses: 10.1.3.64
>
> svr00# clustat
> Member Status: Quorate
>
> Resource Group Manager not running; no service information available.
>
>   Member Name                              Status
>   ------ ----                              ------
>   svr00                                     Online, Local
>   svr01                                     Offline
>
>
> I rebooted svr01 and now it just sits there at Starting clvmd: during
> bootup. 
>
> Hope this helps anyone in understanding my issue. Do I need all the other
> services (clvmd, fenced, etc.) configured for this to work properly?
> I just wanted to see two nodes in a cluster first before I configured any
> resources, services, fencing, and so on. 
>
> \R 
>   
Hi RR,

Hm.  I didn't see anything obviously wrong with your cluster.conf file.
I guess I'd reboot svr01 and try to bring it into the cluster manually, and
see if it complains about anything along the way. 
(You may need to bring it up in single-user mode so that it doesn't hang
at the service script that starts clvmd.)
Something like this:

modprobe lock_dlm       # load the DLM locking module used by GFS
modprobe gfs            # load the GFS filesystem module
ccsd                    # start the cluster configuration (CCS) daemon
cman_tool join -w       # join the cluster and wait for it to complete
fence_tool join -w      # join the fence domain and wait for it to complete
clvmd                   # start the clustered LVM daemon
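
For reference, a minimal two-node cluster.conf for this cluster version
generally has roughly the following shape.  This is only a sketch built from
the names in your status output, not your actual file, and fence_manual is
just a stand-in until you set up real fencing:

<?xml version="1.0"?>
<cluster name="testcluster" config_version="4">
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="svr00" votes="1">
      <fence>
        <method name="1">
          <device name="manual" nodename="svr00"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="svr01" votes="1">
      <fence>
        <method name="1">
          <device name="manual" nodename="svr01"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="manual" agent="fence_manual"/>
  </fencedevices>
</cluster>

The two_node="1" / expected_votes="1" pair is what lets a single node be
quorate on its own, which matches the "Expected_votes: 1" you're seeing
on svr00.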

I'd verify that your communications are sound, that you can ping svr00 from
svr01, and that multicast is working.  Any reason you went with multicast
rather than broadcast?  You could see if a broadcast ping (ping -b) would
work from svr01 to svr00.  Also, you could test whether your firewall is
blocking the traffic by temporarily doing "service iptables stop" on both
nodes.
I'd hope that selinux isn't interfering either, but you could try doing
"setenforce 0" just as an experiment to make sure.  These are just some 
ideas.
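
For example, from svr01 you could try something like this (the broadcast
address below assumes a /24 netmask on your 10.1.3.x network, so adjust it
to whatever you're actually using):

ping -c 3 svr00              # basic reachability of the other node
ping -b -c 3 10.1.3.255      # broadcast ping; 10.1.3.255 is just a guess
service iptables stop        # run on both nodes to rule out the firewall
getenforce                   # check whether selinux is enforcing
setenforce 0                 # temporarily put selinux in permissive mode

If svr01 joins the cluster after stopping iptables or switching selinux to
permissive, that tells you which one needs to be fixed properly rather than
left disabled.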

Regards,

Bob Peterson
Red Hat Cluster Suite



