[Linux-cluster] gnbd fencing

Alex linux at vfemail.net
Tue Aug 19 14:23:49 UTC 2008


I have a cluster with 5 nodes:
- 3 gnbd clients and 
- 2 gnbd servers

our gnbd clients have the following IPs:
192.168.113.{3,4,5}
our gnbd servers have the following IPs:
192.168.113.{6,7}

Our gnbd servers 192.168.113.{6,7} are exporting different block devices, 
let's say: shd1_hdb1 from server1 and shd2_hdc1 from server2

All our clients 192.168.113.{3,4,5} are importing both devices:
[root at rs1 ~]# ls /dev/gnbd
shd1_hdb1  shd2_hdc1
[root at rs2 ~]# ls /dev/gnbd
shd1_hdb1  shd2_hdc1
[root at rs3 ~]# ls /dev/gnbd
shd1_hdb1  shd2_hdc1

On our client nodes, /dev/gnbd/shd1_hdb1 and /dev/gnbd/shd2_hdc1 form an LVM 
mirrored volume, with GFS on top of it.
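
Roughly how that was built, for reference (volume names and sizes here are 
illustrative, and I believe --corelog is what keeps a two-PV mirror from 
needing a third log device):

[root at rs1 ~]# pvcreate /dev/gnbd/shd1_hdb1 /dev/gnbd/shd2_hdc1
[root at rs1 ~]# vgcreate shd_vg /dev/gnbd/shd1_hdb1 /dev/gnbd/shd2_hdc1
[root at rs1 ~]# lvcreate -m1 --corelog -L 10G -n shd_lv shd_vg
[root at rs1 ~]# gfs_mkfs -p lock_dlm -t httpcluster:shd_gfs -j 3 /dev/shd_vg/shd_lv

(-j 3 because there are three client nodes mounting the filesystem.)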

Question1: in cluster.conf, is it correct for our gnbd servers to use the same 
fencing method as our client nodes, or should manual fencing be used instead?

For example, for nodes 4 and 5, is it correct to have:

<clusternode name="192.168.113.6" nodeid="4" votes="1">
<fence>
 <method name="first_fence_method_for_this_node">
  <device name="gnbd_from_shds" nodename="192.168.113.6"/>
 </method>
</fence>
</clusternode>

<clusternode name="192.168.113.7" nodeid="5" votes="1">
<fence>
 <method name="first_fence_method_for_this_node">
  <device name="gnbd_from_shds" nodename="192.168.113.7"/>
 </method>
</fence>
</clusternode>
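
Or, if manual fencing is the right choice for the gnbd servers, I assume the 
entries would look something like this instead, reusing the manual_f fence 
device already defined in our cluster.conf (the method name is just a 
placeholder):

<clusternode name="192.168.113.6" nodeid="4" votes="1">
<fence>
 <method name="manual_fence_method_for_this_node">
  <device name="manual_f" nodename="192.168.113.6"/>
 </method>
</fence>
</clusternode>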

Question2: Should our gfs logical volume be defined as a quorum disk, or is it 
enough to increase the votes on node 4 and node 5 (our gnbd servers) from 
votes="1" to votes="2"?
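
If the quorum disk route is the right one, I assume cluster.conf would need 
something along these lines (the label, timings, votes, and expected_votes 
values below are placeholders, not tested):

        <cman expected_votes="7"/>
        <quorumd interval="1" tko="10" votes="2" label="http_qdisk"/>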

Here comes my current cluster.conf:

<?xml version="1.0"?>
<cluster alias="httpcluster" config_version="32" name="httpcluster">
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="192.168.113.5" nodeid="1" votes="1">
                        <fence>
                                <method name="first_fence_method_for_this_node">
                                        <device name="gnbd_from_shds" nodename="192.168.113.5"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="192.168.113.4" nodeid="2" votes="1">
                        <fence>
                                <method name="first_fence_method_for_this_node">
                                        <device name="gnbd_from_shds" nodename="192.168.113.4"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="192.168.113.3" nodeid="3" votes="1">
                        <fence>
                                <method name="first_fence_method_for_this_node">
                                        <device name="gnbd_from_shds" nodename="192.168.113.3"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="192.168.113.6" nodeid="4" votes="1">
                        <fence/>
                </clusternode>
                <clusternode name="192.168.113.7" nodeid="5" votes="1">
                        <fence/>
                </clusternode>
        </clusternodes>
        <cman/>
        <fencedevices>
                <fencedevice agent="fence_gnbd" name="gnbd_from_shds" servers="192.168.113.6 192.168.113.7"/>
                <fencedevice agent="fence_manual" name="manual_f"/>
        </fencedevices>
        <rm>
                <failoverdomains/>
                <resources/>
        </rm>
</cluster>

Regards,
Alx



