[Linux-cluster] permanently removing node from running cluster

Martin Waite Martin.Waite at datacash.com
Mon Jun 21 09:47:43 UTC 2010



> -----Original Message-----
> From: linux-cluster-bounces at redhat.com
> [mailto:linux-cluster-bounces at redhat.com]
> On Behalf Of Christine Caulfield
> Sent: 21 June 2010 10:32
> To: linux clustering
> Subject: Re: [Linux-cluster] permanently removing node from running cluster
> 
> On 21/06/10 10:14, Martin Waite wrote:
> > Hi,
> >
> > Is it possible to permanently remove a node from a running cluster?
> >
> > All my attempts result in the node being in the state "offline,
> > estranged", and the node still counting as a member in the "Nodes:"
> > count from cman_tool status (but not in the "Expected votes:" count,
> > so I think the quorum size is correct).
> >
> > It appears that the only way to permanently remove references to a
> > node is to restart cman on the surviving nodes.
> >
> > My procedure for removing the node is:
> >
> > 1. relocate any services running on the node
> >
> > 2. edit cluster.conf to remove the node from clusternodes
> >
> > 3. push the config to the cluster with ccs_tool
> >
> > 4. stop rgmanager on the node to be removed
> >
> > 5. stop cman on the node to be removed.
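
The five steps above can be sketched as a runbook. This is an illustrative sketch only, using the node and service names from this thread and the stock RHEL 5 cluster tools; it assumes svXprdclu001 is the node being retired and must be run as root.

```shell
# 1. Relocate any services off the departing node (run on any member).
#    Target node svXprdclu002 is illustrative:
clusvcadm -r service:ACTIVESITE -m svXprdclu002
clusvcadm -r service:MASTERVIP  -m svXprdclu002

# 2. On one surviving node, edit cluster.conf: delete the <clusternode>
#    entry for svXprdclu001 and increment config_version.
vi /etc/cluster/cluster.conf

# 3. Push the updated config to all members:
ccs_tool update /etc/cluster/cluster.conf

# 4. On svXprdclu001, stop rgmanager:
service rgmanager stop

# 5. On svXprdclu001, leave the cluster. "leave remove" asks cman to
#    reduce expected_votes as the node departs, rather than leaving
#    the cluster expecting its vote:
cman_tool leave remove
```

Whether `cman_tool leave remove` (as opposed to a plain `service cman stop`) also clears the node from the membership list depends on the cluster generation, which is the crux of this thread.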
> >
> > At this point, clustat on a surviving node shows:
> >
> > Cluster Status for EDISV1DBM @ Mon Jun 21 09:46:45 2010
> > Member Status: Quorate
> >
> >  Member Name       ID   Status
> >  ------ ----       ---- ------
> >  svXprdclu002      2    Online, Local, rgmanager
> >  svXprdclu003      3    Online, rgmanager
> >  svXprdclu004      4    Online, rgmanager
> >  svXprdclu005      5    Online, rgmanager
> >  svXprdclu001      1    Offline, Estranged
> >
> >  Service Name         Owner (Last)   State
> >  ------- ----         ----- ------   -----
> >  service:ACTIVESITE   svXprdclu002   started
> >  service:MASTERVIP    svXprdclu002   started
> >
> > The removed node (svXprdclu001) is still known to the cluster, but
> > is now "estranged".
> >
> > The node has been removed from the "Expected votes" count, but not
> > the "Nodes" count:
> >
> > sudo /usr/sbin/cman_tool status
> >
> > Version: 6.2.0
> > Config Version: 19
> > Cluster Name: EDISV1DBM
> > Cluster Id: 35945
> > Cluster Member: Yes
> > Cluster Generation: 1008
> > Membership state: Cluster-Member
> > Nodes: 5
> > Expected votes: 4
> > Total votes: 4
> > Quorum: 3
> > Active subsystems: 8
> > Flags: Dirty
> > Ports Bound: 0 177
> > Node name: svXprdclu004
> > Node ID: 4
> > Multicast addresses: 239.192.0.1
> > Node addresses: 10.3.18.24
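
The "Quorum: 3" line in the output above is consistent with cman's usual simple-majority arithmetic, quorum = floor(expected_votes / 2) + 1 (assuming default per-node votes of 1). A quick sketch:

```shell
# With the node removed, expected_votes has dropped from 5 to 4.
# Quorum is a simple majority of the expected votes:
expected_votes=4
quorum=$(( expected_votes / 2 + 1 ))
echo "$quorum"   # prints 3
```

Note that a five-node cluster gives the same quorum (5 / 2 + 1 = 3), which is why removing one node here did not change the quorum threshold.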
> >
> > If I then choose a node (not running the services) and restart cman,
> > this node no longer sees the removed node:
> >
> > [martin at cp1edidbm003 ~]$ sudo /usr/sbin/clustat
> >
> > Cluster Status for EDISV1DBM @ Mon Jun 21 09:53:34 2010
> > Member Status: Quorate
> >
> >  Member Name       ID   Status
> >  ------ ----       ---- ------
> >  svXprdclu002      2    Online
> >  svXprdclu003      3    Online, Local
> >  svXprdclu004      4    Online
> >  svXprdclu005      5    Online
> >
> > [martin at cp1edidbm003 ~]$ sudo /usr/sbin/cman_tool status
> >
> > Version: 6.2.0
> > Config Version: 19
> > Cluster Name: EDISV1DBM
> > Cluster Id: 35945
> > Cluster Member: Yes
> > Cluster Generation: 1008
> > Membership state: Cluster-Member
> > Nodes: 4
> > Expected votes: 4
> > Total votes: 4
> > Quorum: 3
> > Active subsystems: 7
> > Flags: Dirty
> > Ports Bound: 0
> > Node name: svXprdclu003
> > Node ID: 3
> > Multicast addresses: 239.192.0.1
> > Node addresses: 10.3.18.23
> >
> > However, I would prefer not to have to relocate my services in order
> > to restart cman on every node.
> >
> 
> 
> You don't say what version of clustering you are using. In cluster3,
> nodes can be removed permanently from the internal cluster lists by
> removing them from cluster.conf and reloading it. In versions before
> that, they hang around until the whole cluster is rebooted.
> 
> It's just a name in a list, and the inconvenience should be purely
> cosmetic. A node that is not in the cluster has no effect on any
> cluster operations.
> 
> Chrissie
> 

Hi Chrissie, 

I am using whatever version of cluster comes with RHEL 5.4 - this is
likely to be cluster2 given its behaviour.
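
For the record, the installed generation can be checked directly rather than inferred from behaviour. A hedged sketch, assuming the stock RHEL 5 package names:

```shell
# Query the installed cluster packages; RHEL 5.x ships the cluster2
# (cman 2.0.x) generation, RHEL 6 ships cluster3 (cman 3.0.x):
rpm -q cman rgmanager

# cman_tool can also report the running protocol/config version:
sudo /usr/sbin/cman_tool version
```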

I was hoping that the estranged node was just a cosmetic nuisance - so
thanks for the confirmation.

regards,
Martin





