I did kick up the logging on the node, and it gave me that same info.  The way I figured it out was by modifying the netfs.sh script to log both what it was looking for and what it was actually finding: it was looking for /mnt/mysql/, not /mnt/mysql, per the awk statement at the end of the isMounted function in that script.  Once I knew what it was comparing against, I was able to get it to work.  This sounds like a bug to me more than anything.<br>
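For what it's worth, the failure mode is easy to reproduce outside the cluster.  This is only a sketch of the comparison (the real isMounted in netfs.sh is more involved), but it shows why a trailing slash in cluster.conf can never match the mount point as the system reports it:

```shell
# Hypothetical sketch: the path in cluster.conf carries a trailing slash,
# the mounted path reported in /proc/mounts does not, so an exact string
# match (like the awk at the end of isMounted) can never succeed.
conf_mountpoint="/mnt/mysql/"   # as written in cluster.conf
actual_mount="/mnt/mysql"       # as it appears in /proc/mounts

# Exact comparison fails because of the trailing slash:
echo "$actual_mount" |
  awk -v mp="$conf_mountpoint" '$0 == mp { found = 1 } END { exit !found }' \
  && echo "mounted" || echo "not mounted"    # prints "not mounted"

# Normalizing the configured path first makes the same check pass:
mp="${conf_mountpoint%/}"
echo "$actual_mount" |
  awk -v mp="$mp" '$0 == mp { found = 1 } END { exit !found }' \
  && echo "mounted"                          # prints "mounted"
```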
<br><div class="gmail_quote">On Tue, Apr 14, 2009 at 2:39 PM, vu pham <span dir="ltr"><<a href="mailto:vu@sivell.com">vu@sivell.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div class="im"><br>
<br>
Spencer Parker wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
I found my problem.  It was the trailing slash on /mnt/mysql<br>
</blockquote>
<br></div>
Glad you found the problem. I'm just curious that your log didn't show any clear clue. Below is my log from a similar situation; it says pretty clearly that something is wrong in my mount parameters.<br>
<br>
Jan 16 15:33:22 xen2vm1 clurgmgrd[1790]: <notice> Service service:nfs started<br>
Jan 16 15:33:26 xen2vm1 clurgmgrd[1790]: <notice> status on netfs "nfsdata" returned 1 (generic error)<br>
Jan 16 15:33:26 xen2vm1 clurgmgrd[1790]: <notice> Stopping service service:nfs<br>
Jan 16 15:33:26 xen2vm1 clurgmgrd[1790]: <notice> Service service:nfs is recovering<br>
Jan 16 15:33:27 xen2vm1 clurgmgrd[1790]: <notice> Service service:nfs is now running on member 1<br>
Jan 16 15:33:36 xen2vm1 clurgmgrd[1790]: <notice> Recovering failed service service:nfs<br>
Jan 16 15:33:37 xen2vm1 clurgmgrd: [1790]: <err> 'mount  -o sync,soft,noac 172.16.254.14:/data /mnt/nfsdata/' failed, error=32<br>
Jan 16 15:33:37 xen2vm1 clurgmgrd[1790]: <notice> start on netfs "nfsdata" returned 2 (invalid argument(s))<br>
Jan 16 15:33:37 xen2vm1 clurgmgrd[1790]: <warning> #68: Failed to start service:nfs; return value: 1<br>
Jan 16 15:33:37 xen2vm1 clurgmgrd[1790]: <notice> Stopping service service:nfs<br>
Jan 16 15:33:37 xen2vm1 clurgmgrd[1790]: <notice> Service service:nfs is recovering<br>
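The error=32 in the mount line above is mount(8)'s own exit status, which the man page documents as "mount failure"; that is what makes this log more useful than a bare status error. A small lookup sketch of the documented codes (the helper name is mine, not anything rgmanager provides):

```shell
# Map mount(8) exit statuses, as documented in its man page, to their
# meanings.  The function name is made up for this sketch.
mount_err() {
  case "$1" in
    0)  echo "success" ;;
    1)  echo "incorrect invocation or permissions" ;;
    2)  echo "system error (out of memory, cannot fork, no more loop devices)" ;;
    4)  echo "internal mount bug" ;;
    8)  echo "user interrupt" ;;
    16) echo "problems writing or locking /etc/mtab" ;;
    32) echo "mount failure" ;;
    64) echo "some mount succeeded" ;;
    *)  echo "unknown status $1" ;;
  esac
}

mount_err 32   # prints "mount failure" -- the status clurgmgrd logged
```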
<br>
Btw, I use RHEL 5.2, just plain 5.2 from the DVD, without any RHN updates.<br>
<br>
Vu<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><div><div></div><div class="h5">
<br>
On Tue, Apr 14, 2009 at 12:41 PM, Spencer Parker <<a href="mailto:sjpark@rondaandspencer.info" target="_blank">sjpark@rondaandspencer.info</a> <mailto:<a href="mailto:sjpark@rondaandspencer.info" target="_blank">sjpark@rondaandspencer.info</a>>> wrote:<br>

<br>
    <?xml version="1.0"?><br>
    <cluster alias="cluster" config_version="43" name="cluster"><br>
            <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/><br>
            <clusternodes><br>
                    <clusternode name="shadowhawk" nodeid="1" votes="1"><br>
                            <fence><br>
                                    <method name="1"><br>
                                            <device name="shadowhawk-ilo"/><br>
                                    </method><br>
                            </fence><br>
                    </clusternode><br>
                    <clusternode name="darkhawk" nodeid="2" votes="1"><br>
                            <fence><br>
                                    <method name="1"><br>
                                            <device name="darkhawk-ilo"/><br>
                                    </method><br>
                            </fence><br>
                    </clusternode><br>
            </clusternodes><br>
            <cman expected_votes="1" two_node="1"/><br>
            <fencedevices><br>
                    <fencedevice agent="fence_ilo" hostname="darkhawk-ilo" login="cluster" name="darkhawk-ilo" passwd="*******"/><br>
                    <fencedevice agent="fence_ilo" hostname="shadowhawk-ilo" login="cluster" name="shadowhawk-ilo" passwd="*******"/><br>
            </fencedevices><br>
            <rm log_level="7"><br>
                    <failoverdomains><br>
                            <failoverdomain name="failover" nofailback="0" ordered="1" restricted="0"><br>
                                    <failoverdomainnode name="shadowhawk" priority="1"/><br>
                                    <failoverdomainnode name="darkhawk" priority="2"/><br>
                            </failoverdomain><br>
                    </failoverdomains><br>
                    <resources><br>
                            <ip address="10.10.200.25" monitor_link="1"/><br>
                            <script file="/etc/init.d/mysqld" name="mysqld"/><br>
                            <script file="/etc/init.d/httpd" name="httpd"/><br>
                            <netfs export="/vol/test_mysql/mysql" exportpath="/vol/test_mysql/mysql" force_unmount="1" fstype="nfs" host="netapp" mountpoint="/mnt/mysql/" name="mysql_data" nfstype="nfs" options="defaults,rw,async,nfsvers=3,mountvers=3,proto=tcp"/><br>
                    </resources><br>
                    <service autostart="1" domain="failover" exclusive="0" name="cluster" recovery="relocate"><br>
                            <ip ref="10.10.200.25"><br>
                                    <script ref="mysqld"/><br>
                                    <script ref="httpd"/><br>
                            </ip><br>
                    </service><br>
                    <service autostart="1" domain="failover" exclusive="0" name="nfs" recovery="relocate"><br>
                            <netfs ref="mysql_data"/><br>
                    </service><br>
            </rm><br>
    </cluster><br>
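Since the trailing slash on /mnt/mysql in the netfs mountpoint turned out to be the problem here, a sketch of the corrected resource line, identical to the one above except for the mountpoint, would be:

```xml
<netfs export="/vol/test_mysql/mysql" exportpath="/vol/test_mysql/mysql"
       force_unmount="1" fstype="nfs" host="netapp"
       mountpoint="/mnt/mysql" name="mysql_data" nfstype="nfs"
       options="defaults,rw,async,nfsvers=3,mountvers=3,proto=tcp"/>
```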
<br>
<br>
<br>
    On Tue, Apr 14, 2009 at 12:30 PM, vu pham <<a href="mailto:vu@sivell.com" target="_blank">vu@sivell.com</a><br></div></div><div class="im">
    <mailto:<a href="mailto:vu@sivell.com" target="_blank">vu@sivell.com</a>>> wrote:<br>
<br>
<br>
        Spencer Parker wrote:<br>
<br>
            The NFS share is located on a NetApp box not running GFS.<br>
             The NFS share is only there to share the database<br>
            information for the MySQL resource.  The failure comes when<br>
            it goes to check the status of the NFS mount.<br>
<br>
            Jan  9 13:50:40 shadowhawk clurgmgrd[4212]: <notice> status<br>
            on netfs "mysql_data" returned 1 (generic error)<br>
<br>
            That is the error coming out of my log file.  It mounts the<br>
            NFS share just fine...and leaves it mounted as well.  When<br>
            it checks the status it then errors out.<br>
<br>
<br>
        What is your cluster.conf ?<br>
<br>
<br>
<br>
            On Tue, Apr 14, 2009 at 12:23 PM, vu pham <<a href="mailto:vu@sivell.com" target="_blank">vu@sivell.com</a><br></div>
            <mailto:<a href="mailto:vu@sivell.com" target="_blank">vu@sivell.com</a>> <mailto:<a href="mailto:vu@sivell.com" target="_blank">vu@sivell.com</a><div><div></div><div class="h5"><br>
            <mailto:<a href="mailto:vu@sivell.com" target="_blank">vu@sivell.com</a>>>> wrote:<br>
<br>
               Spencer Parker wrote:<br>
<br>
                   I am running a MySQL cluster using cluster services, and<br>
            I have one issue when it comes to NFS.  The MySQL services run<br>
            fine until I add in an NFS mount.  The NFS mount is where all<br>
            of the MySQL databases live.  I can get the NFS share to mount<br>
            properly on the cluster machines, but the log files keep<br>
            telling me it errors out.  Once it errors out, the service<br>
            stops.  I have tried restarting the service, but that just<br>
            remounts the share over the top of the old one.  It never<br>
            unmounts the NFS share upon failure.  I can manually mount it,<br>
            and it works fine...I can read-write to it just fine...I have<br>
            added in options and taken them away.  All of this results in<br>
            the same end.  Any ideas?<br>
<br>
<br>
               Spencer,<br>
<br>
               How do you share the NFS? Do you use GFS?  What are the<br>
            error messages? Is the storage device iSCSI?<br>
<br>
<br>
               Vu<br>
<br>
               --<br>
               Linux-cluster mailing list<br>
               <a href="mailto:Linux-cluster@redhat.com" target="_blank">Linux-cluster@redhat.com</a><br>
            <mailto:<a href="mailto:Linux-cluster@redhat.com" target="_blank">Linux-cluster@redhat.com</a>><br>
            <mailto:<a href="mailto:Linux-cluster@redhat.com" target="_blank">Linux-cluster@redhat.com</a><br>
            <mailto:<a href="mailto:Linux-cluster@redhat.com" target="_blank">Linux-cluster@redhat.com</a>>><br>
<br>
               <a href="https://www.redhat.com/mailman/listinfo/linux-cluster" target="_blank">https://www.redhat.com/mailman/listinfo/linux-cluster</a><br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
</div></div></blockquote><div><div></div><div class="h5">
<br>
</div></div></blockquote></div><br>