<div>Dear all,</div>
<div> </div>
<div>We have set up a 3 + 1 cluster, that is, 3 active nodes and 1 standby node, plus a quorum disk.</div>
<div> </div>
<pre>
clustat
Member Status: Quorate

 Member Name         ID   Status
 ------ ----         ---- ------
 servera             1    Online, rgmanager
 serverb             2    Online, rgmanager
 serverc             3    Online, rgmanager
 standby             4    Online, Local, rgmanager
 /dev/emcpowers      0    Online, Quorum Disk

 Service Name        Owner (Last)   State
 service:servicea    servera        started
 service:serviceb    serverb        started
 service:servicec    serverc        started
</pre>
<div> </div>
<div>If any server fails, its service relocates to the standby server, and basically all cluster functions work properly.</div>
<div> </div>
<div>However, when I type clusvcadm -Z servera, it successfully freezes the service. But when I type clusvcadm -U servera to unfreeze it, rgmanager checks the status of the running application under cluster monitoring, and for some reason the status check returns failed even though the application is running properly. rgmanager then tries to stop the application, reports that it failed to unmount the partition, and servera reboots. While servera is rebooting, servicea cannot fail over to the standby node and the service state shows "recoverable". After servera reboots successfully, servicea can run on servera again, but then serverb and serverc reboot together.</div>
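<div> </div>
<div>For reference, a reboot on a failed unmount is the documented behaviour of the rgmanager fs resource agent when self_fence is enabled, so I suspect the relevant part of our cluster.conf looks roughly like the sketch below (service and device names here are placeholders, not our real config):</div>
<pre>
&lt;!-- Hypothetical cluster.conf fragment; names and paths are assumptions. --&gt;
&lt;service name="servicea" autostart="1" recovery="relocate"&gt;
    &lt;!-- self_fence="1" makes fs.sh reboot the node if the unmount fails,
         which matches the reboot we see after the failed status check --&gt;
    &lt;fs name="servicea_fs" device="/dev/emcpowera1"
        mountpoint="/data/servicea" fstype="ext3"
        force_unmount="1" self_fence="1"/&gt;
    &lt;script name="servicea_app" file="/etc/init.d/servicea"/&gt;
&lt;/service&gt;
</pre>
<div>If our config does set self_fence="1", that would at least explain the reboot itself, though not why the status check fails right after the unfreeze.</div>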
<div> </div>
<div>Do you have any idea what is going wrong?</div>