From ccaulfie at redhat.com  Tue Mar 1 08:38:46 2016
From: ccaulfie at redhat.com (Christine Caulfield)
Date: Tue, 1 Mar 2016 08:38:46 +0000
Subject: [Linux-cluster] make fails "config/libs/libccsconfdb/libccs.so: undefined reference to `confdb_key_iter_typed2'"
In-Reply-To: <56D38441.5040603@richtercloud.de>
References: <56D38441.5040603@richtercloud.de>
Message-ID: <56D55516.3040304@redhat.com>

It looks like you're trying to build the cman-based cluster services against corosync 2; that's not supported. If you want cman then you have to use corosync 1. However, the latest, and maintained, cluster code is corosync 2 + pacemaker.

Chrissie

On 28/02/16 23:35, Karl-Philipp Richter wrote:
> `./configure && make` fails with
>
>   cc -o ccs_tool ccs_tool.o editconf.o -L/mnt/main/sources/cluster/config/libs/libccsconfdb -lccs `xml2-config --libs` -L/usr/lib
>   /mnt/main/sources/cluster/config/libs/libccsconfdb/libccs.so: undefined reference to `confdb_key_iter_typed2'
>   /mnt/main/sources/cluster/config/libs/libccsconfdb/libccs.so: undefined reference to `confdb_key_get_typed2'
>   collect2: error: ld returned 1 exit status
>   Makefile:29: recipe for target 'ccs_tool' failed
>   make[3]: *** [ccs_tool] Error 1
>
> See https://travis-ci.org/krichter722/cluster/builds/112480859 for details. Also experienced on Ubuntu 15.10.
>
> Experienced with cluster-3.2.0-25-g720cbde

From krichter at posteo.de  Tue Mar 1 15:50:31 2016
From: krichter at posteo.de (Karl-Philipp Richter)
Date: Tue, 1 Mar 2016 16:50:31 +0100
Subject: [Linux-cluster] How to undo vgchange --clustered y?
Message-ID: <56D5BA47.1090703@posteo.de>

Hi,
I invoked `vgchange --clustered y [name]` and accepted the warning that the volume group might become inaccessible, because I mistook "inaccessible" for "unavailable to other cluster nodes". Since `clvm` doesn't work on Ubuntu 15.10 and building it from source is painful (http://askubuntu.com/questions/740615/how-to-get-clvmd-running-on-ubuntu-15-10), I seem to have no way to ever access the clustered volume group again. Is there any way to make the volume group accessible again?

-Kalle

From rpeterso at redhat.com  Tue Mar 1 16:02:11 2016
From: rpeterso at redhat.com (Bob Peterson)
Date: Tue, 1 Mar 2016 11:02:11 -0500 (EST)
Subject: [Linux-cluster] How to undo vgchange --clustered y?
In-Reply-To: <56D5BA47.1090703@posteo.de>
References: <56D5BA47.1090703@posteo.de>
Message-ID: <9411495.32397096.1456848131364.JavaMail.zimbra@redhat.com>

----- Original Message -----
> Hi,
> I invoked `vgchange --clustered y [name]` and accepted the warning that
> the volume group might become inaccessible, because I mistook "inaccessible"
> for "unavailable to other cluster nodes". Since `clvm` doesn't work on
> Ubuntu 15.10 and building it from source is painful
> (http://askubuntu.com/questions/740615/how-to-get-clvmd-running-on-ubuntu-15-10),
> I seem to have no way to ever access the clustered volume group again.
> Is there any way to make the volume group accessible again?
>
> -Kalle

Hi Kalle,

I had this problem once a long time ago. I think what I did was: I exported the LUN on the SAN to a working cluster with clvmd, and did vgchange -cn from there. You probably don't have that option. You could try vgchange -fff, but I haven't tried it myself so I don't know if it will work.
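If it helps, a read-only way to double-check that the VG really has the clustered flag set is something like the following sketch, with "vgname" standing in for your actual VG name:

  # vgs -o vg_name,vg_attr vgname

A "c" in the sixth character of the VG attribute string means the VG is marked clustered.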
Regards,

Bob Peterson
Red Hat File Systems

From bmr at redhat.com  Tue Mar 1 16:33:05 2016
From: bmr at redhat.com (Bryn M. Reeves)
Date: Tue, 1 Mar 2016 16:33:05 +0000
Subject: [Linux-cluster] How to undo vgchange --clustered y?
In-Reply-To: <9411495.32397096.1456848131364.JavaMail.zimbra@redhat.com>
References: <56D5BA47.1090703@posteo.de> <9411495.32397096.1456848131364.JavaMail.zimbra@redhat.com>
Message-ID: <20160301163304.GA29278@hex.redhat.com>

On Tue, Mar 01, 2016 at 11:02:11AM -0500, Bob Peterson wrote:
> I had this problem once a long time ago. I think what I did was: I exported
> the LUN on the SAN to a working cluster with clvmd, and did vgchange -cn
> from there. You probably don't have that option. You could try vgchange -fff,
> but I haven't tried it myself so I don't know if it will work.

The easiest and safest way to do it is to use lvm2's --config switch to temporarily disable clustered locking for the vgchange command. Of course, before doing this, you should check that the cluster really is disabled, inquorate or otherwise unusable (or, in the case of an accidental "vgchange -cy", that it doesn't exist :).

The command looks like:

  # vgchange -cn $vgname --config 'global {locking_type = 0}'

Where $vgname is the name of the VG to modify.

If you're on Red Hat and have a portal account, there's some additional detail in the following kbase article:

https://access.redhat.com/solutions/3618

Regards,
Bryn.

From teigland at redhat.com  Tue Mar 1 16:35:07 2016
From: teigland at redhat.com (David Teigland)
Date: Tue, 1 Mar 2016 10:35:07 -0600
Subject: [Linux-cluster] How to undo vgchange --clustered y?
In-Reply-To: <56D5BA47.1090703@posteo.de>
References: <56D5BA47.1090703@posteo.de>
Message-ID: <20160301163507.GA7457@redhat.com>

On Tue, Mar 01, 2016 at 04:50:31PM +0100, Karl-Philipp Richter wrote:
> Hi,
> I invoked `vgchange --clustered y [name]` and accepted the warning that
> the volume group might become inaccessible, because I mistook "inaccessible"
> for "unavailable to other cluster nodes". Since `clvm` doesn't work on
> Ubuntu 15.10 and building it from source is painful
> (http://askubuntu.com/questions/740615/how-to-get-clvmd-running-on-ubuntu-15-10),
> I seem to have no way to ever access the clustered volume group again.
> Is there any way to make the volume group accessible again?

vgchange -cn --config global/locking_type=0 vgname
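Once the clustered flag is cleared, the VG is an ordinary local VG again, so a plain activation should work. Rough sketch only, assuming lvm.conf still uses its default (non-clustered) locking and using "vgname" as a placeholder:

  vgchange -ay vgname
  lvs vgname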
From krichter at posteo.de  Fri Mar 4 16:02:02 2016
From: krichter at posteo.de (Karl-Philipp Richter)
Date: Fri, 4 Mar 2016 17:02:02 +0100
Subject: [Linux-cluster] How to undo vgchange --clustered y?
In-Reply-To: <20160301163304.GA29278@hex.redhat.com>
References: <56D5BA47.1090703@posteo.de> <9411495.32397096.1456848131364.JavaMail.zimbra@redhat.com> <20160301163304.GA29278@hex.redhat.com>
Message-ID: <56D9B17A.3080407@posteo.de>

Hi,

On 01.03.2016 at 17:33, Bryn M. Reeves wrote:
> The command looks like:
>
>   # vgchange -cn $vgname --config 'global {locking_type = 0}'
>
> Where $vgname is the name of the VG to modify.

That worked perfectly. Thanks a lot!

-Kalle

From shreekant.jena at gmail.com  Sat Mar 5 06:46:22 2016
From: shreekant.jena at gmail.com (Shreekant Jena)
Date: Sat, 5 Mar 2016 12:16:22 +0530
Subject: [Linux-cluster] CMAN Failed to start on Secondary Node
Message-ID:

Dear All,

I have a 2 node cluster, but after a reboot the secondary node is showing offline and cman failed to start.

Please find the logs from the secondary node below:

[root at EI51SPM1 cluster]# clustat
msg_open: Invalid argument
Member Status: Inquorate

Resource Group Manager not running; no service information available.

Membership information not available
[root at EI51SPM1 cluster]# tail -10 /var/log/messages
Feb 24 13:36:23 EI51SPM1 ccsd[25487]: Error while processing connect: Connection refused
Feb 24 13:36:23 EI51SPM1 kernel: CMAN: sending membership request
Feb 24 13:36:27 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing connection.
Feb 24 13:36:27 EI51SPM1 ccsd[25487]: Error while processing connect: Connection refused
Feb 24 13:36:28 EI51SPM1 kernel: CMAN: sending membership request
Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing connection.
Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Error while processing connect: Connection refused
Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing connection.
Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Error while processing connect: Connection refused
Feb 24 13:36:33 EI51SPM1 kernel: CMAN: sending membership request
[root at EI51SPM1 cluster]#
[root at EI51SPM1 cluster]# cman_tool status
Protocol version: 5.0.1
Config version: 166
Cluster name: IVRS_DB
Cluster ID: 9982
Cluster Member: No
Membership state: Joining
[root at EI51SPM1 cluster]# cman_tool nodes
Node  Votes Exp Sts  Name
[root at EI51SPM1 cluster]#
[root at EI51SPM1 cluster]#

Thanks & regards
SHREEKANTA JENA

From lists at alteeve.ca  Sat Mar 5 06:47:59 2016
From: lists at alteeve.ca (Digimer)
Date: Sat, 5 Mar 2016 01:47:59 -0500
Subject: [Linux-cluster] CMAN Failed to start on Secondary Node
In-Reply-To:
References:
Message-ID: <56DA811F.6090906@alteeve.ca>

Please share your cluster.conf (only obfuscate passwords please) and the output of 'clustat' from each node.
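Something along these lines, run on both nodes, should capture everything needed (standard paths assumed; adjust if yours differ):

  # cat /etc/cluster/cluster.conf
  # clustat
  # cman_tool status
  # cman_tool nodes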
digimer

On 05/03/16 01:46 AM, Shreekant Jena wrote:
> Dear All,
>
> I have a 2 node cluster, but after a reboot the secondary node is showing
> offline and cman failed to start.
>
> Please find the logs from the secondary node below:
>
> [root at EI51SPM1 cluster]# clustat
> msg_open: Invalid argument
> Member Status: Inquorate
>
> Resource Group Manager not running; no service information available.
>
> Membership information not available
> [root at EI51SPM1 cluster]# tail -10 /var/log/messages
> Feb 24 13:36:23 EI51SPM1 ccsd[25487]: Error while processing connect: Connection refused
> Feb 24 13:36:23 EI51SPM1 kernel: CMAN: sending membership request
> Feb 24 13:36:27 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing connection.
> Feb 24 13:36:27 EI51SPM1 ccsd[25487]: Error while processing connect: Connection refused
> Feb 24 13:36:28 EI51SPM1 kernel: CMAN: sending membership request
> Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing connection.
> Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Error while processing connect: Connection refused
> Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Cluster is not quorate. Refusing connection.
> Feb 24 13:36:32 EI51SPM1 ccsd[25487]: Error while processing connect: Connection refused
> Feb 24 13:36:33 EI51SPM1 kernel: CMAN: sending membership request
> [root at EI51SPM1 cluster]#
> [root at EI51SPM1 cluster]# cman_tool status
> Protocol version: 5.0.1
> Config version: 166
> Cluster name: IVRS_DB
> Cluster ID: 9982
> Cluster Member: No
> Membership state: Joining
> [root at EI51SPM1 cluster]# cman_tool nodes
> Node  Votes Exp Sts  Name
> [root at EI51SPM1 cluster]#
> [root at EI51SPM1 cluster]#
>
> Thanks & regards
> SHREEKANTA JENA

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without access to education?

From shreekant.jena at gmail.com  Sat Mar 5 09:47:26 2016
From: shreekant.jena at gmail.com (Shreekant Jena)
Date: Sat, 5 Mar 2016 15:17:26 +0530
Subject: [Linux-cluster] CMAN Failed to start on Secondary Node
In-Reply-To: <56DA811F.6090906@alteeve.ca>
References: <56DA811F.6090906@alteeve.ca>
Message-ID:

secondary node
--------------------------------------
[root at Node2 ~]# cat /etc/cluster/cluster.conf