[Linux-cluster] RedHat Cluster with DRBD and GFS2

Joe Hammerman jhammerman at saymedia.com
Fri Nov 19 22:13:55 UTC 2010


Good afternoon all. I would imagine this is a topic that comes up regularly, but I have researched it fairly extensively on the internet and have been unable to find anything that actually describes how to configure a cluster in the manner we are attempting. Any ideas or advice would be deeply appreciated!

We have two VMware virtual machines that are members of a two-node cluster, sitting behind a load-balanced VIP. When DRBD breaks, GFS2 locks up, because it is worried about corrupting the data on the shared device. This is good, but the nodes do not fence each other.
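From what I have read, DRBD itself can be told to escalate to cluster-level fencing when replication breaks, rather than leaving GFS2 to block on its own. A minimal sketch of the drbd.conf fragment I believe is needed for a dual-primary setup (the `rhcs_fence` handler path is an assumption based on the drbd-utils docs; check what your package actually ships):

```
resource httpd {
  net {
    allow-two-primaries;          # required for Primary/Primary under GFS2
  }
  disk {
    fencing resource-and-stonith; # freeze I/O and demand peer fencing on disconnect
  }
  handlers {
    # Assumed handler path; some installs ship obliterate-peer.sh instead.
    fence-peer "/usr/lib/drbd/rhcs_fence";
  }
}
```

With `resource-and-stonith`, DRBD suspends I/O on disconnect until the fence-peer handler reports the other node has been shot, which should line up with the cman fencing configured below.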

It was my guess that this is because, although fencing is configured, DRBD is not defined as a resource (is this accurate?). I have edited my cluster.conf file as follows, and updated the cluster's configuration.

<?xml version="1.0"?>
<cluster alias="studio2.sacpa" config_version="8" name="studio2.sacpa">
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="studio104.sacpa.videoegg.com" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="fence_node2"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="studio103.sacpa.videoegg.com" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="fence_node1"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice agent="fence_vmware" ipaddr="10.1.69.106:8333" login="xxx" name="fence_node2" passwd_script="xxx" port="[standard] studio104.sacpa/studio104.sacpa.vmx" vmware_type="server2"/>
                <fencedevice agent="fence_vmware" ipaddr="10.1.69.105:8333" login="xxx" name="fence_node1" passwd_script="xxx" port="[standard] studio103.sacpa/studio103.sacpa.vmx" vmware_type="server2"/>
        </fencedevices>
        <cman expected_votes="1" two_node="1"/>
        <rm>
                <resources>
                </resources>
                <failoverdomains/>
                <service autostart="1" name="httpd_drive">
                        <drbd name="drbd-httpd" resource="httpd">
                                <fs device="/dev/studio-vg/studio-lv" mountpoint="/export/www/html" fstype="gfs2" name="httpd_drive" options="noatime,nodiratime,data=writeback"/>
                        </drbd>
                </service>
        </rm>
</cluster>

I have also tried this with the <fs> device attribute pointed at the underlying DRBD device. However, the drive will not mount, and the service will not start. Can anyone shed any light on this issue? Is Pacemaker a better bet for DRBD Primary/Primary configurations?
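For what it's worth, before chasing the resource agent I have been sanity-checking DRBD state by eye from /proc/drbd, since the mount can only succeed when the resource is connected, dual-primary, and UpToDate on both sides. A small sketch of that check as a shell helper (`drbd_ready` is a hypothetical function of mine, not DRBD tooling; the status-line format is from my 8.3 boxes and may differ on other versions):

```shell
# Decide whether it looks safe to mount GFS2 on top of a DRBD device
# by inspecting its /proc/drbd status line.
drbd_ready() {
    # Safe only when connected, dual-primary, and both disks UpToDate.
    case "$1" in
        *cs:Connected*ro:Primary/Primary*ds:UpToDate/UpToDate*) return 0 ;;
        *) return 1 ;;
    esac
}

# On a live node you would feed it the real line, e.g.:
#   drbd_ready "$(grep '^ 0:' /proc/drbd)" || exit 1
drbd_ready " 0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r----" \
    && echo "safe to mount"
drbd_ready " 0: cs:StandAlone ro:Primary/Unknown ds:UpToDate/DUnknown r----" \
    || echo "do not mount"
```

If the device is stuck Secondary or StandAlone when the service tries to start, that would explain the mount failure regardless of how the <fs> resource is nested.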

Thanks!