[Linux-cluster] GFS as a Resource

Chris Edwards cedwards at smartechcorp.net
Mon Aug 18 17:36:41 UTC 2008

Here is my clustat....

Cluster Status for Xen @ Mon Aug 18 13:28:37 2008
Member Status: Quorate

 Member Name                              ID   Status
 ------ ----                              ---- ------
 xen1.smartechcorp.net                        1 Online, Local, rgmanager
 xen2.smartechcorp.net                        2 Online, rgmanager

 Service Name                    Owner (Last)                    State
 ------- ----                    ----- ------                    -----
 service:GFS Mount Xen1          xen1.smartechcorp.net           started
 service:GFS Mount Xen2          xen2.smartechcorp.net           started

Here is my cluster.conf...

<?xml version="1.0"?>
<cluster alias="Xen" config_version="53" name="Xen">
        <fence_daemon clean_start="0" post_fail_delay="0"/>
        <clusternodes>
                <clusternode name="xen1.smartechcorp.net" nodeid="1">
                        <fence>
                                <method name="1">
                                        <device name="manual"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="xen2.smartechcorp.net" nodeid="2">
                        <fence>
                                <method name="1">
                                        <device name="manual"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices>
                <fencedevice agent="fence_manual" name="manual"/>
        </fencedevices>
        <rm>
                <failoverdomains>
                        <failoverdomain name="bias-xen1" nofailback="0" ordered="1" restricted="0">
                                <failoverdomainnode name="xen1.smartechcorp.net" priority="1"/>
                                <failoverdomainnode name="xen2.smartechcorp.net" priority="2"/>
                        </failoverdomain>
                        <failoverdomain name="bias-xen2" nofailback="0" ordered="1" restricted="0">
                                <failoverdomainnode name="xen1.smartechcorp.net" priority="2"/>
                                <failoverdomainnode name="xen2.smartechcorp.net" priority="1"/>
                        </failoverdomain>
                        <failoverdomain name="gfs-xen1" nofailback="0" ordered="0" restricted="1">
                                <failoverdomainnode name="xen1.smartechcorp.net" priority="1"/>
                        </failoverdomain>
                        <failoverdomain name="gfs-xen2" nofailback="0" ordered="0" restricted="1">
                                <failoverdomainnode name="xen2.smartechcorp.net" priority="1"/>
                        </failoverdomain>
                </failoverdomains>
                <service autostart="1" domain="gfs-xen2" exclusive="0"
                        name="GFS Mount Xen2" recovery="restart"/>
                <service autostart="1" domain="gfs-xen1" exclusive="0"
                        name="GFS Mount Xen1" recovery="restart"/>
        </rm>
        <quorumd device="/dev/sdb5" interval="1" min_score="1" tko="10">
                <heuristic interval="2" program="ping -c3 -t2"/>
        </quorumd>
</cluster>

Without an entry in fstab my GFS file systems never mount, so I am
wondering how I can leave the entries out of my fstab.


Chris Edwards
Smartech Corp.
Div. of AirNet Group
cedwards at smartechcorp.net
P:  423-664-7678 x114
C:  423-593-6964
F:  423-664-7680

-----Original Message-----
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Maurizio Rottin
Sent: Friday, August 15, 2008 2:06 PM
To: linux clustering
Subject: Re: [Linux-cluster] GFS as a Resource

2008/8/15 Chris Edwards <cedwards at smartechcorp.net>:
> Whoops, scratch that last post.   I now have it working by leaving the
> entry in fstab without the noauto option, turning GFS off with
> chkconfig, and using the cluster service to turn it on.
> Thanks again!

I believe that's the wrong way.
I know it works that way, but:
- if you have only one node, do not use GFS; it's slow!
- if you have more than one node, use it -- and if you can, test GFS2
as well (it should be faster and keeps getting faster) -- but do not
mount it from fstab (I mean, it does not need to be listed in fstab at
all).
GFS only works while all the nodes are "up and running", which means
that if one node cannot be reached but is still up (network or other
problems involved), no one can use the GFS filesystem.
You must use it as a resource, and you must have at least one fencing
method for each node in the cluster.
That way, once a node becomes unreachable, it will be fenced and the
other nodes can write happily to the filesystem. This matters because
a node that "can be considered up and maybe running" may still be
writing to the filesystem, or it may think it is the only node in the
cluster (think about a switch problem, or ARP spoofing); if you run
clustat on that node you will see all the other nodes down and only
that one up... this is why you must have a fencing method! That node
HAS TO be shut down or rebooted, otherwise the filesystem will be
blocked, and no read or write can be issued by any of the nodes in the
cluster.
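
By the way, fence_manual is really only good for testing; for real
fencing you want a power or management-board agent. With IPMI, for
example, each node points at a fence device something like this (the
IP address, login and password here are invented placeholders, put in
your own):

        <clusternode name="xen1.smartechcorp.net" nodeid="1">
                <fence>
                        <method name="1">
                                <device name="ipmi-xen1"/>
                        </method>
                </fence>
        </clusternode>
        ...
        <fencedevices>
                <!-- ipaddr/login/passwd are placeholders, not real values -->
                <fencedevice agent="fence_ipmilan" ipaddr="10.0.0.11"
                        login="admin" passwd="secret" name="ipmi-xen1"/>
        </fencedevices>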

I am not talking about how it works in theory (I never attended a RH
session), but believe me, in practice it works like that!

Create a global resource (and always create a global resource, even if
it is a fencing device or a vsftpd resource that every node has in
common) and mount it on every node you need as a service. Do not think
an fstab entry is the best thing you can have; it is not. It can lock
your filesystem until all the nodes are really working and talking to
each other.
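
For example, with the GFS mount defined once as a global resource and
referenced from a service, the rm section looks roughly like this (the
resource name, mountpoint and device are only examples, use your own):

        <rm>
                <resources>
                        <!-- name/mountpoint/device below are example values -->
                        <clusterfs name="gfs-data" fstype="gfs"
                                mountpoint="/mnt/gfs" device="/dev/sdb1"
                                force_unmount="0"/>
                </resources>
                <service autostart="1" domain="gfs-xen1" exclusive="0"
                        name="GFS Mount Xen1" recovery="restart">
                        <clusterfs ref="gfs-data"/>
                </service>
        </rm>

rgmanager then mounts and unmounts the filesystem together with the
service, and nothing is needed in fstab.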


Linux-cluster mailing list
Linux-cluster at redhat.com
