[Linux-cluster] Clumanager and Chkconfig

Kovacs, Corey J. cjk at techma.com
Tue Apr 18 13:28:24 UTC 2006


It's basically a policy decision on your part. Some folks like to have problem
nodes boot up "dumb" to avoid the system taking a beating due to a recurring
problem. It's possible that the cluster would ride this sort of thing out, but
if a node goes down you'd be investigating anyway, so booting "dumb" is not a
bad idea.
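If you do want a node to come back "dumb", the usual way (on RHEL-style init
systems, which clumanager targets) would be something like the following --
a sketch, not taken from any particular cluster's config:

```shell
# Stop clumanager from starting automatically at boot on this node.
# The node will boot up without rejoining the cluster; you then start
# it by hand once you've investigated.
chkconfig clumanager off

# Verify: all runlevels should now show "off".
chkconfig --list clumanager

# When you're satisfied the node is healthy, rejoin manually:
service clumanager start
```

Going the other way, `chkconfig clumanager on` (or `chkconfig --add clumanager`,
as in the output below) restores automatic startup and rejoin on reboot.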


Corey 

-----Original Message-----
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Steve Nelson
Sent: Tuesday, April 18, 2006 9:11 AM
To: linux clustering
Subject: [Linux-cluster] Clumanager and Chkconfig

Hi All,

Should clumanager be set to automatically start on all nodes?  I have a 2
node cluster (+ quorum) where, if I kill an interface, the cluster fails over
and the failed node reboots.  However, the node then rejoins the cluster
automatically - should this happen?

# chkconfig --list clumanager
clumanager      0:off   1:off   2:on    3:on    4:on    5:on    6:off

This is in chkconfig because I ran chkconfig --add clumanager.

On another cluster I have not run this, but that cluster is currently in
production, so I can't test failover there.

My feeling was that Oracle should transfer to the other node, clustat
should show that one node is inactive, and that node should then be started
manually.

Does this seem right?

S.

--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
