[Linux-cluster] Configuration of a 2 node HA cluster with gfs

Lon Hohberger lhh at redhat.com
Fri Apr 15 14:59:27 UTC 2005


On Thu, 2005-04-14 at 09:53 +0200, birger wrote:

> - Mount the disks permanently on both nodes using gfs (less chance of
> nuking the file systems because of a split-brain)

The way GFS forcefully prevents I/O (which is what protects your data!)
is "fencing": fibre channel zoning, a remote power controller, integrated
power control, etc.  Fencing keeps block I/O from a node which has died
from ever reaching the disks, and it works with any file system (not
just GFS).

Fencing is required in order for CMAN to operate in any useful capacity
in 2-node mode.

Anyway, to make this short: You probably want fencing for your solution.
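
For what it's worth, here's a rough sketch of the cluster.conf pieces
involved (node names, the fence device, and its address/credentials are
all made up; check the CCS documentation for the exact attribute names
in your version):

  <?xml version="1.0"?>
  <cluster name="mycluster" config_version="1">
    <!-- two_node/expected_votes let CMAN keep quorum with only 2 nodes -->
    <cman two_node="1" expected_votes="1"/>
    <clusternodes>
      <clusternode name="node1" votes="1">
        <fence>
          <method name="1">
            <device name="apc" port="1"/>
          </method>
        </fence>
      </clusternode>
      <clusternode name="node2" votes="1">
        <fence>
          <method name="1">
            <device name="apc" port="2"/>
          </method>
        </fence>
      </clusternode>
    </clusternodes>
    <fencedevices>
      <!-- a remote power controller; FC zoning agents slot in the same way -->
      <fencedevice name="apc" agent="fence_apc" ipaddr="10.0.0.5"
                   login="apc" passwd="apc"/>
    </fencedevices>
  </cluster>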

> - Perhaps also run NFS services permanently on both nodes, failing over
> only the IP address of the official NFS service. Should make failover
> even faster, but are there pitfalls to running multiple NFS servers off
> the same gfs file system? In addition to failing over the IP address, I
> would have to look into how to take along NFS file locks when doing a
> takeover.

With GFS, the file locking should just kind of "work", but the client
would be required to fail over.  I don't think the Linux NFS client can
do this, but I believe the Solaris one can... (correct me if I'm wrong
here).

Failing over just an IP may work, but there may be some issues as well.
In any case, we should certainly *make* it work if it doesn't at the
moment, eh? :)

With a pure NFS failover solution (e.g. on ext3, without replicated
cluster locks), changes are needed in nfsd, lockd, and rpc.statd to make
lock failover work seamlessly.


> Can anyone 'talk me through' the steps needed to get this up and running?

Well, that's a start on the issues.

You can use rgmanager to do the IP and Samba failover.  Take a look at
"rgmanager/src/daemons/tests/*.conf".  I don't know how well Samba
failover has been tested.
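
As a rough idea of the rgmanager side (again, a sketch patterned after
the test configs, not a verified configuration -- the IP address, domain,
and service names below are invented), something like this would live in
the <rm> section of cluster.conf:

  <rm>
    <failoverdomains>
      <failoverdomain name="nfsdomain" ordered="1">
        <failoverdomainnode name="node1" priority="1"/>
        <failoverdomainnode name="node2" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <service name="nfs_svc" domain="nfsdomain" autostart="1">
      <!-- the floating IP your NFS/Samba clients point at; this is what
           actually moves on failover, since GFS is already mounted on
           both nodes -->
      <ip address="192.168.1.100" monitor_link="1"/>
      <!-- a Samba (smb) resource would hang off the service here as
           well; see the test configs for its attributes -->
    </service>
  </rm>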

-- Lon



