[Linux-cluster] NFS failover problem

Paul n McDowell Paul.McDowell at celera.com
Thu Aug 16 16:32:10 UTC 2007

Hi Maciej,

I'm not sure about these recommendations.  I agree with Peter Kruse's 
comments about creating a symbolic link for /var/lib/nfs [1 point 4e].  I 
have multiple NFS services in my cluster, each serving a different file 
system, so what would I be soft-linking /var/lib/nfs to?
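For a single NFS service, the recipe in [1 point 4e] amounts to moving the state directory onto the shared filesystem and leaving a symlink in its place, roughly as sketched below. This is only an illustration run in a scratch directory; in a real setup the directories would be /var/lib/nfs and the service's shared mount point. Note the symlink can only point at one filesystem, which is exactly the difficulty with multiple services:

```shell
# Illustrative sketch of the single-service recipe from [1 point 4e],
# demonstrated in a scratch directory; in a real cluster these would be
# /var/lib/nfs and the service's shared filesystem.
SCRATCH=$(mktemp -d)
mkdir -p "$SCRATCH/shared_fs1"        # stands in for the shared FS
mkdir -p "$SCRATCH/var_lib_nfs"       # stands in for /var/lib/nfs
touch "$SCRATCH/var_lib_nfs/rmtab"    # statd/mountd state lives here

# Move the state onto the shared FS and leave a symlink behind, so the
# state travels with the filesystem when the service fails over.
mv "$SCRATCH/var_lib_nfs" "$SCRATCH/shared_fs1/varlibnfs"
ln -s "$SCRATCH/shared_fs1/varlibnfs" "$SCRATCH/var_lib_nfs"

readlink "$SCRATCH/var_lib_nfs"       # points into shared_fs1
```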

And for the NFS statd recommendation [1 point 4f], again I have several 
NFS services running on my cluster, each with its own IP alias, so what 
should I be setting STATD_HOSTNAME to in /etc/sysconfig/network?
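For reference, the single-service setting from [1 point 4f] comes down to the one-line fragment below; the alias hostname shown is purely hypothetical, not from the original post. Since each node runs only one statd instance, only one name can be given, which is the crux of the question when several per-service IP aliases exist:

```shell
# Sysconfig fragment per the single-service recipe in [1 point 4f];
# "nfs-alias.example.com" is an illustrative IP-alias hostname.
STATD_HOSTNAME=nfs-alias.example.com
```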

I never had to make either of these changes in my previous NFS server 
running on a RHEL 3 cluster which worked perfectly.



Maciej Bogucki <maciej.bogucki at artegence.com> 
Sent by: linux-cluster-bounces at redhat.com
08/16/2007 11:55 AM
Please respond to
linux clustering <linux-cluster at redhat.com>


Re: [Linux-cluster] NFS failover problem

> If I use the same standard mount options on both clients (e.g. mount
> SERVER:/exportfs    /mountpoint -t nfs -o rw,noatime ) then everything
> works fine until I perform a failover.  At that point the RHEL 3 client
> is OK but the RHEL 4 client can no longer stat the filesystem (df
> hangs).  If I move the service back the hung df command completes.  I
> don't see an I/O error per se, but any copies to and from that
> mountpoint are inactive until I relocate the service back.


I suspect you have a problem with the NFS lock manager [1 point 4e] or
with NFS statd [1 point 4f].

[1] - http://www.linux-ha.org/DRBD/NFS

Best Regards
Maciej Bogucki

Linux-cluster mailing list
Linux-cluster at redhat.com

