If you look closely, you are missing the opening <nfsexport> element in the third service (nfs3): it has a closing </nfsexport> tag but no opening one.
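The third service should presumably mirror the first two; a minimal sketch of the corrected stanza follows (the export name nfs-cvg00-volume03 is a hypothetical placeholder, since the original message never shows it):

<service autostart="1" domain="nfs3-domain" exclusive="0" name="nfs3" nfslock="1" recovery="relocate">
        <ip ref="192.168.1.3">
                <fs __independent_subtree="1" ref="volume03">
                        <!-- this opening tag is what is missing from the posted config;
                             the export name here is a placeholder -->
                        <nfsexport name="nfs-cvg00-volume03">
                                <nfsclient name=" " ref="local-subnet"/>
                        </nfsexport>
                </fs>
        </ip>
</service>

With that in place, nfs3 has the same resource tree as nfs1 and nfs2: the IP on top, the filesystem under it, the export under the filesystem, and the client ACL innermost.
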
2012/5/14 Randy Zagar <zagar@arlut.utexas.edu>:

I have an existing CentOS-5 cluster I've configured for High-Availability NFS (v3). Everything is working fine. I've included a simplified cluster.conf file below.

I originally started with 3 file servers that were not clustered. I converted to a clustered configuration where my NFS clients never get "stale NFS" error messages. When a node failed, all NFS exports (and their associated IP addresses) would move to another system faster than my clients could time out.
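
For reference, rgmanager can also relocate a service on demand with clusvcadm; a minimal example, using a service and node name from the config below:

    clusvcadm -r nfs1 -m node02.arlut.utexas.edu

rgmanager stops the service tree on its current owner and restarts it on the named member.
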
I understand that changes to the portmapper in EL6 and NFSv4 make it much more difficult to configure HA-NFS, and so far I have not seen any good documentation on how to set up an HA-NFS configuration in EL6.

Does anyone have any suggestions, or links to documentation that you can send me?

-RZ

P.S. Simplified cluster.conf file for EL5...

<?xml version="1.0"?>
<cluster alias="ha-nfs-el5" config_version="357" name="ha-nfs-el5">
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="<a href="http://node01.arlut.utexas.edu" target="_blank">node01.arlut.utexas.edu</a>" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="node01-ilo"/>
                                </method>
                                <method name="2">
                                        <device name="sanbox01" port="0"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="<a href="http://node02.arlut.utexas.edu" target="_blank">node02.arlut.utexas.edu</a>" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="node02-ilo"/>
                                </method>
                                <method name="2">
                                        <device name="sanbox02" port="0"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="<a href="http://node03.arlut.utexas.edu" target="_blank">node03.arlut.utexas.edu</a>" nodeid="3" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="node03-ilo"/>
                                </method>
                                <method name="2">
                                        <device name="sanbox03" port="0"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman/>
        <fencedevices>
                <fencedevice agent="fence_sanbox2" ipaddr="<a href="http://sanbox01.arlut.utexas.edu" target="_blank">sanbox01.arlut.utexas.edu</a>" login="admin" name="sanbox01" passwd="password"/>
                <fencedevice agent="fence_sanbox2" ipaddr="<a href="http://sanbox02.arlut.utexas.edu" target="_blank">sanbox02.arlut.utexas.edu</a>" login="admin" name="sanbox02" passwd="password"/>
                <fencedevice agent="fence_sanbox2" ipaddr="<a href="http://sanbox03.arlut.utexas.edu" target="_blank">sanbox03.arlut.utexas.edu</a>" login="admin" name="sanbox03" passwd="password"/>
                <fencedevice agent="fence_ilo" hostname="node01-ilo" login="Administrator" name="node01-ilo" passwd="DUMMY"/>
                <fencedevice agent="fence_ilo" hostname="node02-ilo" login="Administrator" name="node02-ilo" passwd="DUMMY"/>
                <fencedevice agent="fence_ilo" hostname="node03-ilo" login="Administrator" name="node03-ilo" passwd="DUMMY"/>
        </fencedevices>
        <rm>
                <failoverdomains>
                        <failoverdomain name="nfs1-domain" nofailback="1" ordered="1" restricted="1">
                                <failoverdomainnode name="<a href="http://node01.arlut.utexas.edu" target="_blank">node01.arlut.utexas.edu</a>" priority="1"/>
                                <failoverdomainnode name="<a href="http://node02.arlut.utexas.edu" target="_blank">node02.arlut.utexas.edu</a>" priority="2"/>
                                <failoverdomainnode name="<a href="http://node03.arlut.utexas.edu" target="_blank">node03.arlut.utexas.edu</a>" priority="3"/>
                        </failoverdomain>
                        <failoverdomain name="nfs2-domain" nofailback="1" ordered="1" restricted="1">
                                <failoverdomainnode name="<a href="http://node01.arlut.utexas.edu" target="_blank">node01.arlut.utexas.edu</a>" priority="3"/>
                                <failoverdomainnode name="<a href="http://node02.arlut.utexas.edu" target="_blank">node02.arlut.utexas.edu</a>" priority="1"/>
                                <failoverdomainnode name="<a href="http://node03.arlut.utexas.edu" target="_blank">node03.arlut.utexas.edu</a>" priority="2"/>
                        </failoverdomain>
                        <failoverdomain name="nfs3-domain" nofailback="1" ordered="1" restricted="1">
                                <failoverdomainnode name="<a href="http://node01.arlut.utexas.edu" target="_blank">node01.arlut.utexas.edu</a>" priority="2"/>
                                <failoverdomainnode name="<a href="http://node02.arlut.utexas.edu" target="_blank">node02.arlut.utexas.edu</a>" priority="3"/>
                                <failoverdomainnode name="<a href="http://node03.arlut.utexas.edu" target="_blank">node03.arlut.utexas.edu</a>" priority="1"/>
                        </failoverdomain>
                </failoverdomains>
                <resources>
                        <ip address="192.168.1.1" monitor_link="1"/>
                        <ip address="192.168.1.2" monitor_link="1"/>
                        <ip address="192.168.1.3" monitor_link="1"/>
                        <fs device="/dev/cvg00/volume01" force_fsck="0" force_unmount="1" fsid="49388" fstype="ext3" mountpoint="/lvm/volume01" name="volume01" self_fence="0"/>
                        <fs device="/dev/cvg00/volume02" force_fsck="0" force_unmount="1" fsid="58665" fstype="ext3" mountpoint="/lvm/volume01" name="volume01" self_fence="0"/>
                        <fs device="/dev/cvg00/volume03" force_fsck="0" force_unmount="1" fsid="61028" fstype="ext3" mountpoint="/lvm/volume01" name="volume01" self_fence="0"/>
                        <nfsclient allow_recover="1" name="local-subnet" options="rw,insecure" target="192.168.1.0/24"/>
                </resources>
                <service autostart="1" domain="nfs1-domain" exclusive="0" name="nfs1" nfslock="1" recovery="relocate">
                        <ip ref="192.168.1.1">
                                <fs __independent_subtree="1" ref="volume01">
                                        <nfsexport name="nfs-cvg00-brazos02">
                                                <nfsclient name=" " ref="local-subnet"/>
                                        </nfsexport>
                                </fs>
                        </ip>
                </service>
                <service autostart="1" domain="nfs2-domain" exclusive="0" name="nfs2" nfslock="1" recovery="relocate">
                        <ip ref="192.168.1.2">
                                <fs __independent_subtree="1" ref="volume02">
                                        <nfsexport name="nfs-sdd01-data02">
                                                <nfsclient name=" " ref="local-subnet"/>
                                        </nfsexport>
                                </fs>
                        </ip>
                </service>
                <service autostart="1" domain="nfs3-domain" exclusive="0" name="nfs3" nfslock="1" recovery="relocate">
                        <ip ref="192.168.1.3">
                                <fs __independent_subtree="1" ref="volume03">
                                                <nfsclient name=" " ref="local-subnet"/>
                                        </nfsexport>
                                </fs>
                        </ip>
                </service>
        </rm>
</cluster>

-- 
Randy Zagar                               Sr. Unix Systems Administrator
E-mail: zagar@arlut.utexas.edu            Applied Research Laboratories
Phone: 512 835-3131                       Univ. of Texas at Austin


--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster

-- 
This is my life and I live it for as long as God wills.