[Linux-cluster] Problems configuring fence devices

carlopmart carlopmart at gmail.com
Wed Dec 17 12:05:42 UTC 2008


Hi all,

  I have set up three nodes with Red Hat Cluster Suite on RHEL 5.2. One of these 
three nodes acts as an iSCSI server and provides fence_gnbd to the other ones. On 
this node I have configured fence_manual because it is the first node to start up. 
But every time it returns these errors until node1 or node2, or both, start:


Dec 17 12:42:21 node3 dlm_controld[2568]: connect to ccs error -111, check ccsd or cluster status
Dec 17 12:42:21 node3 ccsd[2538]: Cluster is not quorate.  Refusing connection.
Dec 17 12:42:21 node3 ccsd[2538]: Error while processing connect: Connection refused
Dec 17 12:42:21 node3 ccsd[2538]: Cluster is not quorate.  Refusing connection.
Dec 17 12:42:21 node3 ccsd[2538]: Error while processing connect: Connection refused
Dec 17 12:42:22 node3 ccsd[2538]: Cluster is not quorate.  Refusing connection.

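For what it's worth, the refusals look like plain quorum arithmetic: with three one-vote nodes, the default quorum threshold is floor(votes/2) + 1 = 2 votes, so a single node on its own can never be quorate. A rough sketch of that calculation (my reading of the default CMAN majority rule, not output from the cluster itself):

```python
# Sketch of the default majority-quorum rule, assuming three
# one-vote nodes as in the cluster.conf below.

def quorum_threshold(expected_votes: int) -> int:
    """Minimum votes needed for quorum: floor(expected/2) + 1."""
    return expected_votes // 2 + 1

def is_quorate(votes_present: int, expected_votes: int) -> bool:
    """A partition is quorate when it holds a strict majority of votes."""
    return votes_present >= quorum_threshold(expected_votes)

expected = 3  # three clusternodes, votes="1" each
print(quorum_threshold(expected))  # 2
print(is_quorate(1, expected))     # False: node3 alone -> "Cluster is not quorate"
print(is_quorate(2, expected))     # True once node1 or node2 joins
```

So the messages are expected until a second node joins; the question is whether they can be avoided for the storage node that has to boot first.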

  My cluster.conf is:

<?xml version="1.0"?>
<cluster alias="RhelHomeCluster" config_version="4" name="RhelHomeCluster">
         <fence_daemon post_fail_delay="0" post_join_delay="3"/>
         <clusternodes>
                 <clusternode name="node1" nodeid="2" votes="1">
                         <fence>
                                 <method name="1">
                                         <device name="gnbd-fence01" nodename="node1"/>
                                 </method>
                         </fence>
                         <multicast addr="239.192.75.55" interface="eth0"/>
                 </clusternode>
                 <clusternode name="node2" nodeid="3" votes="1">
                         <fence>
                                 <method name="1">
                                         <device name="gnbd-fence01" nodename="node2"/>
                                 </method>
                         </fence>
                         <multicast addr="239.192.75.55" interface="eth0"/>
                 </clusternode>
                 <clusternode name="node3" nodeid="1" votes="1">
                         <fence>
                                 <method name="1">
                                         <device name="last_resort" nodename="node3"/>
                                 </method>
                         </fence>
                         <multicast addr="239.192.75.55" interface="eth1"/>
                 </clusternode>
         </clusternodes>
         <cman>
                 <multicast addr="239.192.75.55"/>
         </cman>
         <fencedevices>
                 <fencedevice agent="fence_gnbd" name="gnbd-fence01" servers="node3"/>
                 <fencedevice agent="fence_manual" name="last_resort"/>
         </fencedevices>
         <rm log_facility="local4" log_level="7">
                 <failoverdomains>
                         <failoverdomain name="PriCluster" ordered="1" restricted="1">
                                 <failoverdomainnode name="node1" priority="1"/>
                                 <failoverdomainnode name="node2" priority="2"/>
                         </failoverdomain>
                         <failoverdomain name="SecCluster" ordered="1" restricted="1">
                                 <failoverdomainnode name="node2" priority="1"/>
                                 <failoverdomainnode name="node1" priority="2"/>
                         </failoverdomain>
                 </failoverdomains>
                 <resources>
                         <ip address="172.25.50.18" monitor_link="1"/>
                 </resources>
                 <service autostart="1" domain="SecCluster" name="proxy-svc" recovery="relocate">
                         <ip ref="172.25.50.18">
                                 <script file="/data/configs/etc/init.d/squid" name="squid"/>
                         </ip>
                 </service>
         </rm>
</cluster>
-- 
CL Martinez
carlopmart {at} gmail {d0t} com
