[Linux-cluster] GFS as a Resource

Maurizio Rottin maurizio.rottin at gmail.com
Fri Aug 15 18:05:48 UTC 2008

2008/8/15 Chris Edwards <cedwards at smartechcorp.net>:
> Whoops, scratch that last post.   I now have it working by leaving the entry
> in fstab without the noauto and turning GFS off with chkconfig and allowing
> the cluster service to turn it on.
> Thanks again!

I believe that's the wrong way.
I know it works like that, but:
- if you have only one node, do not use GFS, it's slow!
- if you have more than one node, use it -- and if you can, test GFS2
as well (it should be faster and faster) -- but do not mount it from
fstab (I mean, it does not need to be listed in fstab at all).
GFS only works if all the nodes are "up and running", which means
that if one node cannot be reached but is still up (because of
network or other problems), no one will be able to use the GFS
filesystem. You must use it as a resource, and you must have at
least one fencing method for each node in the cluster.
That way, once a node becomes unreachable it will be fenced, and
the other nodes can happily write to the filesystem. This matters
because a node that "can be considered up and maybe running" may
still be writing to the filesystem, or it may even believe it is the
only node in the cluster (think of a switch problem, or ARP
spoofing): if you run "clustat" on that node you will see all the
other nodes down and only that one up... this is why you must have a
fencing method! That node HAS TO be shut down or rebooted, otherwise
the filesystem will be blocked, and no read or write can be issued
by any of the nodes in the cluster.
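As a rough sketch of what "one fencing method for each node" means in
cluster.conf (the node names, fence agent, addresses and credentials
below are made-up examples, not from the original post):

```xml
<!-- Hypothetical cluster.conf fragment: one fence method per node.
     Node names, fencedevice names, agent and addresses are examples. -->
<clusternodes>
  <clusternode name="node1" nodeid="1">
    <fence>
      <method name="1">
        <device name="ipmi-node1"/>
      </method>
    </fence>
  </clusternode>
  <clusternode name="node2" nodeid="2">
    <fence>
      <method name="1">
        <device name="ipmi-node2"/>
      </method>
    </fence>
  </clusternode>
</clusternodes>
<fencedevices>
  <fencedevice agent="fence_ipmilan" name="ipmi-node1"
               ipaddr="192.168.1.11" login="admin" passwd="secret"/>
  <fencedevice agent="fence_ipmilan" name="ipmi-node2"
               ipaddr="192.168.1.12" login="admin" passwd="secret"/>
</fencedevices>
```

With something like this in place, a node that stops responding gets
power-fenced, so the surviving nodes can safely resume I/O on the GFS
filesystem.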

I am not talking about how it works in theory (I never attended a RH
training session), but believe me, in practice it works like that!

Create a global resource (and always create a global resource, even
if it is a fencing resource or a vsftpd resource that every node has
in common) and mount it on every node you need as a service. Do not
think an fstab entry is the best thing you can have; it is not, and
it can lock your filesystem until all the nodes are really working
and talking to each other.
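To give a concrete idea of "global resource mounted as a service",
an rgmanager fragment of cluster.conf could look something like this
(the resource name, device path, mount point and service name are
hypothetical examples):

```xml
<rm>
  <resources>
    <!-- Global GFS filesystem resource, shared by all services.
         Device and mountpoint are made-up examples. -->
    <clusterfs name="gfs-data" fstype="gfs"
               device="/dev/vg0/gfslv" mountpoint="/mnt/gfs"
               force_unmount="1"/>
  </resources>
  <service name="myservice" autostart="1">
    <!-- Reference the shared resource instead of an fstab entry -->
    <clusterfs ref="gfs-data"/>
  </service>
</rm>
```

This way the cluster software, not fstab, decides when the
filesystem is mounted, and it can refuse or release the mount when
the cluster is not healthy.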
