<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
I have an existing CentOS 5 cluster configured for
High-Availability NFS (v3), and everything is working fine. I've
included a simplified cluster.conf file below.<br>
<br>
I originally started with 3 file servers that were not clustered. I
converted to a clustered configuration in which my NFS clients never
see "Stale NFS file handle" errors. When a node failed, all NFS exports
(and their associated IP addresses) would move to another node
faster than my clients could time out.<br>
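<br>
For reference, failover is transparent because clients mount via the floating service IP rather than a node hostname (the service IP and export path below come from my cluster.conf; the client-side mount point is arbitrary):<br>
<blockquote>
<pre>mount -t nfs -o vers=3,hard,intr 192.168.1.1:/lvm/volume01 /mnt/data</pre>
</blockquote>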
<br>
I understand that changes to the portmapper in EL6 and NFSv4 make it
much more difficult to configure HA-NFS and, so far, I have not seen
any good documentation on how to set up HA-NFS on EL6.<br>
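<br>
For what it's worth, the closest thing I can find in the EL6 resource agents is the new "nfsserver" resource. My untested guess (resource names are placeholders, reusing the resources from my EL5 config below) is that a service would look something like this, but I'd like confirmation from someone who has it working:<br>
<blockquote>
<pre><service autostart="1" domain="nfs1-domain" exclusive="0" name="nfs1" recovery="relocate">
<fs ref="volume01">
<nfsserver name="nfs1-server" path="/lvm/volume01">
<nfsclient ref="local-subnet"/>
<ip ref="192.168.1.1"/>
</nfsserver>
</fs>
</service>
</pre>
</blockquote>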
<br>
Does anyone have suggestions, or links to documentation you could
send me?<br>
<br>
-RZ<br>
<br>
P.S. Simplified cluster.conf file for EL5...<br>
<blockquote>
<pre><?xml version="1.0"?>
<cluster alias="ha-nfs-el5" config_version="357" name="ha-nfs-el5">
<fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
<clusternodes>
<clusternode name="node01.arlut.utexas.edu" nodeid="1" votes="1">
<fence>
<method name="1">
<device name="node01-ilo"/>
</method>
<method name="2">
<device name="sanbox01" port="0"/>
</method>
</fence>
</clusternode>
<clusternode name="node02.arlut.utexas.edu" nodeid="2" votes="1">
<fence>
<method name="1">
<device name="node02-ilo"/>
</method>
<method name="2">
<device name="sanbox02" port="0"/>
</method>
</fence>
</clusternode>
<clusternode name="node03.arlut.utexas.edu" nodeid="3" votes="1">
<fence>
<method name="1">
<device name="node03-ilo"/>
</method>
<method name="2">
<device name="sanbox03" port="0"/>
</method>
</fence>
</clusternode>
</clusternodes>
<cman/>
<fencedevices>
<fencedevice agent="fence_sanbox2" ipaddr="sanbox01.arlut.utexas.edu" login="admin" name="sanbox01" passwd="password"/>
<fencedevice agent="fence_sanbox2" ipaddr="sanbox02.arlut.utexas.edu" login="admin" name="sanbox02" passwd="password"/>
<fencedevice agent="fence_sanbox2" ipaddr="sanbox03.arlut.utexas.edu" login="admin" name="sanbox03" passwd="password"/>
<fencedevice agent="fence_ilo" hostname="node01-ilo" login="Administrator" name="node01-ilo" passwd="DUMMY"/>
<fencedevice agent="fence_ilo" hostname="node02-ilo" login="Administrator" name="node02-ilo" passwd="DUMMY"/>
<fencedevice agent="fence_ilo" hostname="node03-ilo" login="Administrator" name="node03-ilo" passwd="DUMMY"/>
</fencedevices>
<rm>
<failoverdomains>
<failoverdomain name="nfs1-domain" nofailback="1" ordered="1" restricted="1">
<failoverdomainnode name="node01.arlut.utexas.edu" priority="1"/>
<failoverdomainnode name="node02.arlut.utexas.edu" priority="2"/>
<failoverdomainnode name="node03.arlut.utexas.edu" priority="3"/>
</failoverdomain>
<failoverdomain name="nfs2-domain" nofailback="1" ordered="1" restricted="1">
<failoverdomainnode name="node01.arlut.utexas.edu" priority="3"/>
<failoverdomainnode name="node02.arlut.utexas.edu" priority="1"/>
<failoverdomainnode name="node03.arlut.utexas.edu" priority="2"/>
</failoverdomain>
<failoverdomain name="nfs3-domain" nofailback="1" ordered="1" restricted="1">
<failoverdomainnode name="node01.arlut.utexas.edu" priority="2"/>
<failoverdomainnode name="node02.arlut.utexas.edu" priority="3"/>
<failoverdomainnode name="node03.arlut.utexas.edu" priority="1"/>
</failoverdomain>
</failoverdomains>
<resources>
<ip address="192.168.1.1" monitor_link="1"/>
<ip address="192.168.1.2" monitor_link="1"/>
<ip address="192.168.1.3" monitor_link="1"/>
<fs device="/dev/cvg00/volume01" force_fsck="0" force_unmount="1" fsid="49388" fstype="ext3" mountpoint="/lvm/volume01" name="volume01" self_fence="0"/>
<fs device="/dev/cvg00/volume02" force_fsck="0" force_unmount="1" fsid="58665" fstype="ext3" mountpoint="/lvm/volume02" name="volume02" self_fence="0"/>
<fs device="/dev/cvg00/volume03" force_fsck="0" force_unmount="1" fsid="61028" fstype="ext3" mountpoint="/lvm/volume03" name="volume03" self_fence="0"/>
<nfsclient allow_recover="1" name="local-subnet" options="rw,insecure" target="192.168.1.0/24"/>
</resources>
<service autostart="1" domain="nfs1-domain" exclusive="0" name="nfs1" nfslock="1" recovery="relocate">
<ip ref="192.168.1.1">
<fs __independent_subtree="1" ref="volume01">
<nfsexport name="nfs-cvg00-brazos02">
<nfsclient name=" " ref="local-subnet"/>
</nfsexport>
</fs>
</ip>
</service>
<service autostart="1" domain="nfs2-domain" exclusive="0" name="nfs2" nfslock="1" recovery="relocate">
<ip ref="192.168.1.2">
<fs __independent_subtree="1" ref="volume02">
<nfsexport name="nfs-sdd01-data02">
<nfsclient name=" " ref="local-subnet"/>
</nfsexport>
</fs>
</ip>
</service>
<service autostart="1" domain="nfs3-domain" exclusive="0" name="nfs3" nfslock="1" recovery="relocate">
<ip ref="192.168.1.3">
<fs __independent_subtree="1" ref="volume03">
<nfsexport name="nfs-export3"> <!-- opening tag was missing; export name is a placeholder -->
<nfsclient name=" " ref="local-subnet"/>
</nfsexport>
</fs>
</ip>
</service>
</rm>
</cluster>
</pre>
</blockquote>
<pre class="moz-signature" cols="72">--
Randy Zagar                      Sr. Unix Systems Administrator
E-mail: <a class="moz-txt-link-abbreviated" href="mailto:zagar@arlut.utexas.edu">zagar@arlut.utexas.edu</a>   Applied Research Laboratories
Phone: 512 835-3131              Univ. of Texas at Austin
</pre>
</body>
</html>