[Linux-cluster] unfreeze a node of the cluster and cause reboot remaining nodes

KC LO kclo2000 at gmail.com
Tue Jan 11 09:19:17 UTC 2011


Dear all,

We have set up a 3 + 1 cluster, i.e. 3 active nodes, 1 standby node, and a
quorum disk.

clustat
Member Status: Quorate
 Member Name                             ID   Status
 ------ ----                             ---- ------
 servera                                1 Online, rgmanager
 serverb                                2 Online, rgmanager
 serverc                                3 Online, rgmanager
 standby                               4 Online, Local, rgmanager
 /dev/emcpowers                    0 Online, Quorum Disk

 Service Name                 Owner (Last)                   State
 service:servicea              servera                   started
 service:serviceb              serverb                   started
 service:servicec              serverc                   started

Any server failure causes its service to relocate to the standby server,
and generally the cluster functions properly.

However, when I type clusvcadm -Z servera, it successfully freezes the
node.  But if I then type clusvcadm -U servera to unfreeze it, rgmanager
checks the status of the application it is monitoring, and for some reason
the status check returns failed even though the application is running
properly.  It then tries to stop the application, reports that it failed to
unmount the partition, and servera gets rebooted.  While servera is
rebooting, servicea cannot fail over to the standby node and the service
state shows "recoverable".  After servera reboots successfully, servicea
can run on servera again, but then serverb and serverc reboot together.

Do you have any idea?
