From dawood.m at msystechnologies.com  Tue Apr  4 14:50:03 2017
From: dawood.m at msystechnologies.com (Dawood Munavar S M)
Date: Tue, 4 Apr 2017 20:20:03 +0530
Subject: [Linux-cluster] Mount error: gfs2: rhel 7.2: Transport endpoint is not connected
Message-ID: 

Hello Team,

I am working on creating cluster configurations using Pacemaker on RHEL 7.2
servers connected to a SAN storage controller, and I am currently stuck with
the issue below. It would be great if you could help me resolve it.

Node1: hostname: node1-atto
Node2: hostname: node2-emulex

[root at node1-atto ~]# mount -t gfs2 /dev/vol_group/lv /mnt/
mount: mount /dev/mapper/vol_group-lv on /mnt failed: Transport endpoint is not connected

Steps completed:

1. Completed the cluster configuration and fencing (fence_scsi) with 2 nodes.
2. Created the LVM volume using pvcreate, vgcreate and lvcreate on one node;
   the other node could see the created volume successfully.
3. Created a gfs2 file system with the command below on both nodes - successful:
   mkfs.gfs2 -p lock_dlm -t mycluster:gfs -j 2 /dev/vol_group/lv
   (Please find the attachment - File1.xlsx)
4. When I try to mount, I see the message below:
   mount: mount /dev/mapper/vol_group-lv on /mnt failed: Transport endpoint is not connected

Troubleshooting steps:

1. Verified by creating an ext4 FS; mounting is successful on both nodes.
2. Verified by creating a gfs2 FS with the lock_nolock option using the command
   below; mounting is successful on both nodes:
   mkfs.gfs2 -p lock_nolock -t mycluster:gfs -j 2 /dev/vol_group/lv

The issue consistently occurs only with the lock_dlm option.

Other steps performed by following Red Hat forums: I am still facing the same
mount issue. Please find the attached document File2.docx.

Queries:

1. Currently I haven't installed the lvm2-cluster package on either node due to
   dependency issues. Is the lvm2-cluster package mandatory for mounting gfs2
   under Pacemaker on RHEL 7.2?
2. We have bought only one subscription for the HA packages; we attached it on
   Node1, installed the related packages there, and then installed them on
   Node2 as well. Will this have an impact on the mount issue above?
   Please find the attached document File3.docx.

Thanks,
Munavar.
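(A note for anyone reproducing this setup: the sequence below is roughly what the
RHEL 7 High Availability documentation describes for getting a lock_dlm GFS2 mount
under Pacemaker. The resource names are illustrative and only the device path is
taken from the post above. The clvm agent runs clvmd, which on RHEL 7 ships in
lvm2-cluster, so the dependency problems mentioned in query 1 would need to be
resolved first.)

  # Pacemaker-managed prerequisites for a lock_dlm GFS2 mount
  pcs property set no-quorum-policy=freeze
  pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s \
      on-fail=fence clone interleave=true ordered=true
  pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s \
      on-fail=fence clone interleave=true ordered=true
  pcs constraint order start dlm-clone then clvmd-clone
  pcs constraint colocation add clvmd-clone with dlm-clone
  # Let the cluster, not /etc/fstab, do the mount:
  pcs resource create clusterfs Filesystem device="/dev/vol_group/lv" \
      directory="/mnt" fstype="gfs2" options="noatime" \
      op monitor interval=10s on-fail=fence clone interleave=true
  pcs constraint order start clvmd-clone then clusterfs-clone
  pcs constraint colocation add clusterfs-clone with clvmd-clone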
From rpeterso at redhat.com  Tue Apr  4 14:58:52 2017
From: rpeterso at redhat.com (Bob Peterson)
Date: Tue, 4 Apr 2017 10:58:52 -0400 (EDT)
Subject: [Linux-cluster] Mount error: gfs2: rhel 7.2: Transport endpoint is not connected
In-Reply-To: 
References: 
Message-ID: <1397909917.10839107.1491317932703.JavaMail.zimbra@redhat.com>

----- Original Message -----
| Hello Team,
|
| I am working on creating cluster configurations using pacemaker on Rhel 7.2
| server connected to a SAN storage controller and currently I am struck with
| the below issue.
| [...]
| Thanks,
| Munavar.

Hi Munavar,

If this is truly RHEL, please contact Red Hat support and file a ticket.

That message means GFS2 is not communicating through DLM. That could be caused
by a number of things, such as:

1. Lock table not matching your cluster name before ":"
2. Same lock table after ":" as another mounted gfs2 file system.
3. Not enough journals to use.

Regards,

Bob Peterson
Red Hat File Systems
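(A quick checklist for the three causes above, assuming gfs2-utils and the dlm
packages are installed and using the device path from the original post; the exact
output of tunegfs2 and dlm_tool may vary by version, so treat this as a sketch and
check the man pages:)

  # 1. Cluster name known to corosync (must match the part before ":" in -t):
  grep cluster_name /etc/corosync/corosync.conf
  # 2. Lock protocol and lock table recorded in the gfs2 superblock:
  tunegfs2 -l /dev/vol_group/lv
  # 3. Any other gfs2 mount already using the same lock table:
  mount -t gfs2
  # 4. Is DLM actually up on this node (dlm-clone started, lockspaces listed)?
  dlm_tool status
  dlm_tool ls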
From costan at amg.it  Tue Apr 11 10:35:18 2017
From: costan at amg.it (Andrea Costantino)
Date: Tue, 11 Apr 2017 12:35:18 +0200
Subject: [Linux-cluster] clvmd issue with RH/Centos 6.9 update
Message-ID: <005401d2b2af$50f85a80$f2e90f80$@amg.it>

Hello fellow cluster guys,

I just upgraded my CentOS cluster to 6.9, and after a node reboot I discovered
that the clvm functionality was broken. Long story made short, I tracked it
back to the clvmd process being hung with one CPU stuck.

The quick and dirty solution was to roll back the kernel to version
2.6.32-642.15.1.el6.x86_64, and the problem did not happen anymore.

Anyway, after opening a bugzilla, the RH guys pointed me to the real issue,
which is a libqb thread issue. The bug (containing the temporary fix, waiting
for the next package upgrade) is tracked here:

https://bugzilla.redhat.com/show_bug.cgi?id=1440160

Hope it helps others.

Ciao,
A.

From dawood.m at msystechnologies.com  Fri Apr 28 10:34:23 2017
From: dawood.m at msystechnologies.com (Dawood Munavar S M)
Date: Fri, 28 Apr 2017 16:04:23 +0530
Subject: [Linux-cluster] Rhel 7.2 Pacemaker cluster - gfs2 file system- NFS document
Message-ID: 

Hello All,

Could you please share any links/documents on creating an NFS HA cluster over a
gfs2 file system using Pacemaker?

Currently I have completed everything up to mounting the gfs2 file systems on
the cluster nodes, and now I need to create cluster resources for the NFS
server, the exports, and the mount on the client.

Thanks,
Munavar.
From lists at alteeve.ca  Fri Apr 28 15:25:50 2017
From: lists at alteeve.ca (Digimer)
Date: Fri, 28 Apr 2017 11:25:50 -0400
Subject: [Linux-cluster] Rhel 7.2 Pacemaker cluster - gfs2 file system- NFS document
In-Reply-To: 
References: 
Message-ID: 

On 28/04/17 06:34 AM, Dawood Munavar S M wrote:
> Hello All,
>
> Could you please share any links/documents to create NFS HA cluster over
> gfs2 file system using Pacemaker.
>
> Currently I have completed till mounting of gfs2 file systems on cluster
> nodes and now I need to create cluster resources for NFS server, exports
> and mount on client.
>
> Thanks,
> Munavar.

I use gfs2 quite a bit, but not nfs.

Can I make a suggestion? Don't use gfs2 for this.

You will have much better performance if you use an active/passive failover
with a non-clustered FS. GFS2, like any cluster FS, needs to have the cluster
handle locks, which is always going to be slower (by a fair amount) than
traditional internal FS locking.

The common NFS HA cluster setup is to have the cluster promote/connect the
backing storage (drbd/iscsi), mount the FS, start nfs and then take a floating
IP address.

GFS2 is an excellent FS for situations where it is needed, and should be
avoided anywhere possible. :)

-- 
Digimer
Papers and Projects: https://alteeve.com/w/
"I am, somehow, less interested in the weight and convolutions of Einstein's
brain than in the near certainty that people of equal talent have lived and
died in cotton fields and sweatshops." - Stephen Jay Gould

From dawood.m at msystechnologies.com  Fri Apr 28 15:51:14 2017
From: dawood.m at msystechnologies.com (Dawood Munavar S M)
Date: Fri, 28 Apr 2017 21:21:14 +0530
Subject: [Linux-cluster] Rhel 7.2 Pacemaker cluster - gfs2 file system- NFS document
Message-ID: 

Hi,

Thanks for your reply and the valuable comments.

Currently I am only sanity-testing the RHEL 7.2 cluster configuration with our
FC storage arrays, so performance really doesn't matter for us at the moment.

With regard to creating the NFS cluster resources over gfs2, we followed the
steps below, but "showmount -e" still doesn't list the export entries.

Note: these steps were performed after mounting the gfs2 file systems on the
cluster nodes.

1. pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.2.90
   cidr_netmask=24 op monitor interval=30s
2. pcs resource create NFS-D nfsserver nfs_shared_infodir=/global/nfsinfo
   nfs_ip=192.168.2.90
3. pcs resource create nfs-cm-shared exportfs
   clientspec=192.168.2.0/255.255.255.0 options=rw,sync,no_root_squash
   directory=/SAP_SOFT fsid=0
4. Added resource dependencies.
5. [root at node2-atto ~]# showmount -e 192.168.2.90
   Export list for 192.168.2.90:
   **** No Entries *****

I went through the Red Hat forums, and it is mentioned that exporting a GFS2
filesystem in an Active/Active configuration is only supported when using
Samba+CTDB to export the GFS2 filesystem. Please let us know whether it is
mandatory to configure CTDB when NFS over gfs2 is configured, or whether any
other option is available.

Thanks,
Munavar.

On Fri, Apr 28, 2017 at 8:55 PM, Digimer wrote:
> On 28/04/17 06:34 AM, Dawood Munavar S M wrote:
> > Hello All,
> > [...]
>
> I use gfs2 quite a bit, but not nfs.
>
> Can I make a suggestion? Don't use gfs2 for this.
> [...]
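(Step 4 above does not show the actual dependencies that were added. Below is a
sketch of the ordering and colocation constraints this kind of stack usually
needs, using the resource names from this thread; clusterfs-clone is the gfs2
Filesystem clone that appears in the pcs status later in the thread. Without the
colocations, the export and the floating IP can land on different nodes, which is
one common reason showmount against the virtual IP returns an empty list.)

  # Order: gfs2 mount -> NFS server -> export -> floating IP,
  # and keep the export and the IP on the same node as the NFS server.
  pcs constraint order start clusterfs-clone then NFS-D
  pcs constraint order start NFS-D then nfs-cm-shared
  pcs constraint order start nfs-cm-shared then ClusterIP
  pcs constraint colocation add nfs-cm-shared with NFS-D INFINITY
  pcs constraint colocation add ClusterIP with NFS-D INFINITY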
From emi2fast at gmail.com  Fri Apr 28 22:23:38 2017
From: emi2fast at gmail.com (emmanuel segura)
Date: Sat, 29 Apr 2017 00:23:38 +0200
Subject: [Linux-cluster] Rhel 7.2 Pacemaker cluster - gfs2 file system- NFS document
In-Reply-To: 
References: 
Message-ID: 

can you provide pcs status ?

2017-04-28 17:51 GMT+02:00 Dawood Munavar S M :
> Hi,
>
> Thanks for your reply and the valuable comments.
>
> Curently I am only sanity testing the Rhel 7.2 cluster configuration with
> our FC storage arrays, so performance really doesn't matters for us at the
> moment.
> [...]

-- 
.~.
/V\
// \\
/( )\
^`~'^
From dawood.m at msystechnologies.com  Sat Apr 29 07:08:57 2017
From: dawood.m at msystechnologies.com (Dawood Munavar S M)
Date: Sat, 29 Apr 2017 12:38:57 +0530
Subject: [Linux-cluster] Rhel 7.2 Pacemaker cluster - gfs2 file system- NFS document
Message-ID: 

Hi Emmanuel,

Please find the status below.

[root at node1-emulex ~]# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: node2-atto (version 1.1.15-11.el7_3.4-e174ec8) - partition with quorum
Last updated: Sat Apr 29 03:01:41 2017
Last change: Fri Apr 28 11:09:25 2017 by root via cibadmin on node2-atto

2 nodes and 10 resources configured

Online: [ node1-emulex node2-atto ]

Full list of resources:

 scsi   (stonith:fence_scsi):   Started node2-atto
 Clone Set: dlm-clone [dlm]
     Started: [ node1-emulex node2-atto ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ node1-emulex node2-atto ]
 Clone Set: clusterfs-clone [clusterfs]
     Started: [ node1-emulex node2-atto ]
 ClusterIP      (ocf::heartbeat:IPaddr2):       Started node1-emulex
 NFS-D  (ocf::heartbeat:nfsserver):     Started node1-emulex
 nfs-cm-shared  (ocf::heartbeat:exportfs):      Started node2-atto

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

[root at node1-emulex ~]# pcs status resources ClusterIP
 Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: ip=172.30.59.253 cidr_netmask=19
  Operations: start interval=0s timeout=20s (ClusterIP-start-interval-0s)
              stop interval=0s timeout=20s (ClusterIP-stop-interval-0s)
              monitor interval=30s (ClusterIP-monitor-interval-30s)

[root at node1-emulex ~]# pcs status resources NFS-D
 Resource: NFS-D (class=ocf provider=heartbeat type=nfsserver)
  Attributes: nfs_shared_infodir=/mnt/cluster/nfsinfo/ nfs_ip=172.30.59.253
  Operations: start interval=0s timeout=40 (NFS-D-start-interval-0s)
              stop interval=0s timeout=20s (NFS-D-stop-interval-0s)
              monitor interval=10 timeout=20s (NFS-D-monitor-interval-10)

[root at node1-emulex ~]# pcs status resources nfs-cm-shared
 Resource: nfs-cm-shared (class=ocf provider=heartbeat type=exportfs)
  Attributes: clientspec=172.30.59.254/255.255.224.0 options=rw,sync,no_root_squash directory=/mnt/cluster/exports/ fsid=0
  Operations: start interval=0s timeout=40 (nfs-cm-shared-start-interval-0s)
              stop interval=0s timeout=120 (nfs-cm-shared-stop-interval-0s)
              monitor interval=10 timeout=20 (nfs-cm-shared-monitor-interval-10)

[root at node1-emulex ~]# mount | grep gfs2
/dev/mapper/volgroup-vol on /mnt/cluster type gfs2 (rw,noatime,nodiratime,seclabel)
/dev/mapper/volgroup-vol on /var/lib/nfs type gfs2 (rw,noatime,nodiratime,seclabel)

Thanks,
Munavar.

On Sat, Apr 29, 2017 at 3:53 AM, emmanuel segura wrote:
> can you provide pcs status ?
> [...]
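(In the status above the exportfs resource is on node2-atto while the nfsserver
and the floating IP are on node1-emulex. A quick way to confirm which node
actually holds the export and the VIP, using the resource names and address from
the output above; exact exportfs output will vary:)

  # Run on each node:
  pcs status resources                  # where NFS-D, nfs-cm-shared, ClusterIP run
  exportfs -v                           # exports actually applied on this node
  ip addr show | grep 172.30.59.253     # does this node hold the floating IP?
  showmount -e 172.30.59.253            # what a client would see via the VIP

If the export and the NFS server really are on different nodes, as the status
suggests, a colocation constraint tying nfs-cm-shared and ClusterIP to NFS-D is
usually the first thing to add.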
From deepeshkumarpal at gmail.com  Sat Apr 29 11:34:04 2017
From: deepeshkumarpal at gmail.com (deepesh kumar)
Date: Sat, 29 Apr 2017 17:04:04 +0530
Subject: [Linux-cluster] HA cluster 6.5 redhat active passive Error
Message-ID: 

Hi,

Currently I am testing a 2-node active/passive cluster on RHEL 6.5 with an
ext4 FS.

I have set up all the basic settings - luci, ricci and the other services are
running, and both nodes are part of the cluster.

I am not able to make the clvmd configuration attributes work for my VG. All
required settings in lvm.conf, with tags for all non-shared VGs, are in place.

Logs:
Apr 28 21:08:07 12RHAPPTR04V rgmanager[2183]: I am node #2
Apr 28 21:08:07 12RHAPPTR04V rgmanager[2183]: Resource Group Manager Starting
Apr 28 21:08:07 12RHAPPTR04V rgmanager[2183]: Loading Service Data
Apr 28 21:08:11 12RHAPPTR04V rgmanager[2183]: Initializing Services
Apr 28 21:08:11 12RHAPPTR04V rgmanager[3212]: [lvm] HA LVM: Unable to get volume group attributes for /dev/mapper/shared_vg-ha_lv
Apr 28 21:08:12 12RHAPPTR04V rgmanager[3249]: [lvm] Deactivating /dev/mapper/shared_vg-ha_lv//hadb
Apr 28 21:08:12 12RHAPPTR04V rgmanager[3277]: [lvm] Making resilient : lvchange -an /dev/mapper/shared_vg-ha_lv//hadb
Apr 28 21:08:12 12RHAPPTR04V rgmanager[3305]: [lvm] Resilient command: lvchange -an /dev/mapper/shared_vg-ha_lv//hadb --config devices{filter=["a|/dev/mapper/0QEMU_QEMU_HAR
Apr 28 21:08:12 12RHAPPTR04V rgmanager[3330]: [lvm] lv_exec_resilient failed
Apr 28 21:08:12 12RHAPPTR04V rgmanager[3354]: [lvm] lv_activate_resilient stop failed on /dev/mapper/shared_vg-ha_lv//hadb
Apr 28 21:08:13 12RHAPPTR04V rgmanager[3379]: [lvm] Unable to deactivate /dev/mapper/shared_vg-ha_lv//hadb
Apr 28 21:08:13 12RHAPPTR04V polkitd[3405]: started daemon version 0.96 using authority implementation `local' version `0.96'
Apr 28 21:08:13 12RHAPPTR04V rtkit-daemon[3400]: Sucessfully made thread 3397 of process 3397 (/usr/bin/pulseaudio) owned by '42' high priority at nice level -11.
Apr 28 21:08:13 12RHAPPTR04V rgmanager[3418]: [lvm] Failed to stop /dev/mapper/shared_vg-ha_lv//hadb
Apr 28 21:08:13 12RHAPPTR04V rgmanager[3440]: [lvm] Attempting cleanup of /dev/mapper/shared_vg-ha_lv
Apr 28 21:08:13 12RHAPPTR04V rgmanager[3465]: [lvm] Failed to stop /dev/mapper/shared_vg-ha_lv//hadb
Apr 28 21:08:13 12RHAPPTR04V rgmanager[2183]: stop on lvm "UNIDB" returned 1 (generic error)
Apr 28 21:08:14 12RHAPPTR04V rgmanager[3537]: [script] /usr/share/cluster/db2start.sh does not exist
Apr 28 21:08:14 12RHAPPTR04V rgmanager[2183]: Services Initialized
Apr 28 21:08:14 12RHAPPTR04V rgmanager[2183]: State change: Local UP
Apr 28 21:08:14 12RHAPPTR04V rgmanager[2183]: Starting stopped service service:Database
Apr 28 21:08:14 12RHAPPTR04V rgmanager[3578]: [script] /usr/share/cluster/db2start.sh does not exist
Apr 28 21:08:14 12RHAPPTR04V rgmanager[2183]: start on script "DB2" returned 5 (program not installed)
Apr 28 21:08:14 12RHAPPTR04V rgmanager[2183]: #68: Failed to start service:Database; return value: 1
Apr 28 21:08:14 12RHAPPTR04V rgmanager[2183]: Stopping service service:Database
Apr 28 21:08:15 12RHAPPTR04V rgmanager[3613]: [lvm] HA LVM: Unable to get volume group attributes for /dev/mapper/shared_vg-ha_lv
Apr 28 21:08:15 12RHAPPTR04V rgmanager[3646]: [lvm] Deactivating /dev/mapper/shared_vg-ha_lv//hadb
Apr 28 21:08:15 12RHAPPTR04V rgmanager[3668]: [lvm] Making resilient : lvchange -an /dev/mapper/shared_vg-ha_lv//hadb
Apr 28 21:08:15 12RHAPPTR04V rgmanager[3693]: [lvm] Resilient command: lvchange -an /dev/mapper/shared_vg-ha_lv//hadb --config devices{filter=["a|/dev/mapper/0QEMU_QEMU_HAR
Apr 28 21:08:15 12RHAPPTR04V rgmanager[3716]: [lvm] lv_exec_resilient failed
Apr 28 21:08:16 12RHAPPTR04V rgmanager[3741]: [lvm] lv_activate_resilient stop failed on /dev/mapper/shared_vg-ha_lv//hadb
Apr 28 21:08:16 12RHAPPTR04V rgmanager[3763]: [lvm] Unable to deactivate /dev/mapper/shared_vg-ha_lv//hadb
Apr 28 21:08:16 12RHAPPTR04V rgmanager[3785]: [lvm] Failed to stop /dev/mapper/shared_vg-ha_lv//hadb
Apr 28 21:08:16 12RHAPPTR04V rgmanager[3808]: [lvm] Attempting cleanup of /dev/mapper/shared_vg-ha_lv
Apr 28 21:08:16 12RHAPPTR04V rgmanager[3831]: [lvm] Failed to stop /dev/mapper/shared_vg-ha_lv//hadb
Apr 28 21:08:16 12RHAPPTR04V rgmanager[2183]: stop on lvm "UNIDB" returned 1 (generic error)
Apr 28 21:08:17 12RHAPPTR04V rgmanager[3899]: [script] /usr/share/cluster/db2start.sh does not exist
Apr 28 21:08:17 12RHAPPTR04V rgmanager[2183]: stop on script "DB2" returned 5 (program not installed)
Apr 28 21:08:17 12RHAPPTR04V rgmanager[2183]: #12: RG service:Database failed to stop; intervention required
Apr 28 21:08:17 12RHAPPTR04V rgmanager[2183]: Service service:Database is failed
Apr 28 21:08:17 12RHAPPTR04V rgmanager[2183]: #13: Service service:Database failed to stop cleanly
Apr 28 21:08:28 12RHAPPTR04V rgmanager[2183]: State change: 12RHAPPTR03V UP
Apr 28 21:08:46 12RHAPPTR04V kernel: fuse init (API version 7.14)
Apr 28 21:08:46 12RHAPPTR04V seahorse-daemon[4044]: DNS-SD initialization failed: Daemon not running
Apr 28 21:08:46 12RHAPPTR04V seahorse-daemon[4044]: init gpgme version 1.1.8
Apr 28 21:08:46 12RHAPPTR04V pulseaudio[4099]: pid.c: Stale PID file, overwriting.
Apr 28 21:09:38 12RHAPPTR04V ricci[4367]: Executing '/usr/bin/virsh nodeinfo'

thanks
Deepesh kumar

-- 
DEEPESH KUMAR
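(Before digging into the cluster config, it may be worth checking what the lvm
resource agent is being handed, assuming the names that appear in the log,
shared_vg and ha_lv. The doubled path /dev/mapper/shared_vg-ha_lv//hadb suggests
the resource may have been given a full device path plus an extra LV name rather
than plain vg_name/lv_name values, so treat the names below as guesses:)

  # Do the VG and LV exist under the names the agent expects?
  vgs -o vg_name,vg_attr,vg_tags shared_vg
  lvs -o lv_name,lv_attr,lv_tags shared_vg
  # Tag-based HA-LVM activation is driven by volume_list / locking_type in lvm.conf:
  grep -E '^[[:space:]]*(volume_list|locking_type)' /etc/lvm/lvm.conf
  # Manual equivalent of what the agent attempts during a stop:
  lvchange -an shared_vg/ha_lv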
From lists at alteeve.ca  Sat Apr 29 17:14:35 2017
From: lists at alteeve.ca (Digimer)
Date: Sat, 29 Apr 2017 13:14:35 -0400
Subject: [Linux-cluster] HA cluster 6.5 redhat active passive Error
In-Reply-To: 
References: 
Message-ID: 

On 29/04/17 07:34 AM, deepesh kumar wrote:
> Hi ,
>
> Currently I am testing 2 node active passive cluster on RHEL 6.5 with
> ext4 fs.
>
> I have set up all basic settings ..like Luci , ricci and other services
> running and both the nodes are part of cluster.
>
> I am not able to make the clvmd configuration attributes for my vy . All
> required seetings in lvm.conf with tags for all nonshared vgs are in place.
> [...]
Hi Deepesh,

You probably got a notice that the linux-cluster list is deprecated. I am
replying to the new list, clusterlabs. You will want to subscribe there and
continue over there, as there are many more people on that list.

For clvmd, you need to set this in lvm.conf:

global {
    locking_type = 3
    fallback_to_clustered_locking = 1
    fallback_to_local_locking = 0
}

This assumes you are not trying to use LVM and clustered LVM at the same time.
If you are, you probably don't want to. If you do anyway, don't set the
fallback variables.

With this, you then start cman, then start clvmd. With clvmd running, new VGs
default to clustered type. You can override this with 'vgcreate -c{y,n}'.

If you still have trouble, please share your full cluster.conf (obfuscate
passwords, please).

-- 
Digimer
Papers and Projects: https://alteeve.com/w/
"I am, somehow, less interested in the weight and convolutions of Einstein's
brain than in the near certainty that people of equal talent have lived and
died in cotton fields and sweatshops." - Stephen Jay Gould
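(If you go the clvmd route described above rather than tag-based HA-LVM, a
minimal verification sketch for the RHEL 6 / cman stack, assuming shared_vg is
the volume group from the logs earlier in the thread:)

  # After editing lvm.conf on every node:
  service cman start
  service clvmd start
  # Mark an existing VG clustered (or create a new one with 'vgcreate -cy'):
  vgchange -cy shared_vg
  # Verify: a 'c' in the sixth character of the attr column means clustered
  vgs -o vg_name,vg_attr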