[Linux-cluster] Rhel 7.2 Pacemaker cluster - gfs2 file system- NFS document

Dawood Munavar S M dawood.m at msystechnologies.com
Fri Apr 28 15:51:14 UTC 2017


Thanks for your reply and the valuable comments.

Currently I am only sanity-testing the RHEL 7.2 cluster configuration with
our FC storage arrays, so performance really doesn't matter for us at the moment.

Also, with regard to creating the NFS cluster resources on top of gfs2, we
followed the steps below, but "showmount -e" still doesn't list the export entries:

*Note:* These steps are performed after the gfs2 file systems are mounted on the cluster nodes.

1. pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip= cidr_netmask=24 op monitor interval=30s
2. pcs resource create NFS-D nfsserver nfs_shared_infodir=/global/nfsinfo
3. pcs resource create nfs-cm-shared exportfs clientspec= options=rw,sync,no_root_squash directory=/SAP_SOFT fsid=0
4. Added the resource dependencies (see the constraint sketch after this list)
5. [root@node2-atto ~]# showmount -e
Export list for
**** No Entries *****
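
For reference, by "resource dependencies" in step 4 I mean ordering and
colocation constraints roughly like the following (a sketch, not the exact
commands from our cluster; ClusterIP, NFS-D and nfs-cm-shared are the
resources created in steps 1-3):

pcs constraint colocation add NFS-D with ClusterIP INFINITY
pcs constraint order start ClusterIP then start NFS-D
pcs constraint colocation add nfs-cm-shared with NFS-D INFINITY
pcs constraint order start NFS-D then start nfs-cm-shared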

I went through the Red Hat forums, and it is mentioned there that exporting a
GFS2 filesystem in an active/active configuration is only supported when using
*Samba+CTDB* to export the GFS2 filesystem. Please let us know whether it is
mandatory to configure CTDB when NFS is layered over gfs2, or whether any
other option is available.
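
In the meantime, the basic checks we are running on the node that currently
hosts the resources look like this (just a sketch of what we are verifying;
exportfs -v reads the kernel export table directly, while showmount -e asks
mountd):

[root@node2-atto ~]# pcs status     # confirm ClusterIP, NFS-D and nfs-cm-shared are started, and on which node
[root@node2-atto ~]# exportfs -v    # list what the kernel on that node is actually exporting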


On Fri, Apr 28, 2017 at 8:55 PM, Digimer <lists at alteeve.ca> wrote:

> On 28/04/17 06:34 AM, Dawood Munavar S M wrote:
> > Hello All,
> >
> > Could you please share any links/documents to create NFS HA cluster over
> > gfs2 file system using Pacemaker.
> >
> > Currently I have completed till mounting of gfs2 file systems on cluster
> > nodes and now I need to create cluster resources for NFS server, exports
> > and mount on client.
> >
> > Thanks,
> > Munavar.
> I use gfs2 quite a bit, but not nfs.
>
> Can I make a suggestion? Don't use gfs2 for this.
>
> You will have much better performance if you use an active/passive
> failover with a non-clustered FS. GFS2, like any cluster FS, needs to
> have the cluster handle locks which is always going to be slower (by a
> fair amount) than traditional internal FS locking.
>
> The common NFS HA cluster setup is to have the cluster promote/connect
> the backing storage (drbd/iscsi), mount the FS, start nfs and then take
> a floating IP address.
>
> GFS2 is an excellent FS for situations where it is needed, and should be
> avoided anywhere possible. :)
> --
> Digimer
> Papers and Projects: https://alteeve.com/w/
> "I am, somehow, less interested in the weight and convolutions of
> Einstein’s brain than in the near certainty that people of equal talent
> have lived and died in cotton fields and sweatshops." - Stephen Jay Gould
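
For completeness, if we do end up dropping gfs2 here and following the
active/passive layout you describe, I assume it would be a single resource
group roughly like the one below (the device, mount point, network and
address are placeholders, not our actual configuration):

pcs resource create nfs-fs Filesystem device=/dev/mapper/vg_nfs-lv_nfs directory=/exports fstype=xfs --group nfsgroup
pcs resource create nfs-daemon nfsserver nfs_shared_infodir=/exports/nfsinfo --group nfsgroup
pcs resource create nfs-export exportfs clientspec=192.168.0.0/24 options=rw,sync,no_root_squash directory=/exports fsid=0 --group nfsgroup
pcs resource create nfs-vip IPaddr2 ip=192.168.0.100 cidr_netmask=24 --group nfsgroup

Group members start in the order they are added, so the filesystem mounts
first, then nfsserver starts, then the export is published, and finally the
floating IP comes up, which matches the sequence described above.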




