From cluster.labs at gmail.com  Sun Feb  1 05:07:21 2015
From: cluster.labs at gmail.com (cluster lab)
Date: Sun, 1 Feb 2015 08:37:21 +0330
Subject: [Linux-cluster] GFS2: "Could not open" the file on one of the nodes
In-Reply-To: 
References: <678887466.3468720.1422537011406.JavaMail.zimbra@redhat.com>
	<149243946.3558706.1422543817506.JavaMail.zimbra@redhat.com>
	<1658385774.3582814.1422545696153.JavaMail.zimbra@redhat.com>
	<54CC6ADC.1000305@alteeve.ca> <54CC823D.1080602@alteeve.ca>
Message-ID: 

A restart solved the problem ...
But why ..?

On Sat, Jan 31, 2015 at 11:22 AM, cluster lab wrote:
> Excuse me for the partial logs ...
>
> Jan 21 17:07:57 node2 fenced[47840]: fence node1 success
>
> All other logs are about HA of VMs, ... and IO errors for these files ...
>
> Some new info: This problem occurred for about 4 files:
> three of them cause an IO error on node 3, and one of them on node 2 ...
>
> On Sat, Jan 31, 2015 at 10:50 AM, Digimer wrote:
>> On 31/01/15 01:52 AM, cluster lab wrote:
>>>
>>> Jan 21 17:07:43 ost-pvm2 fenced[47840]: fencing node ost-pvm1
>>
>> There are no messages about this succeeding or failing... It looks like
>> only 15 seconds' worth of logs. Can you please share the full amount of
>> time I mentioned before, from both nodes?
>>
>> --
>> Digimer
>> Papers and Projects: https://alteeve.ca/w/
>> What if the cure for cancer is trapped in the mind of a person without
>> access to education?
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster

From jpokorny at redhat.com  Mon Feb 2 16:48:10 2015
From: jpokorny at redhat.com (Jan =?utf-8?Q?Pokorn=C3=BD?=)
Date: Mon, 2 Feb 2015 17:48:10 +0100
Subject: [Linux-cluster] [Pacemaker] HA Summit Key-signing Party (was: Organizing HA Summit 2015)
In-Reply-To: <20150126141438.GE21558@redhat.com>
References: <540D853F.3090109@redhat.com> <54B4ADAA.5080803@alteeve.ca>
	<20150126141438.GE21558@redhat.com>
Message-ID: <20150202164810.GA9404@redhat.com>

On 26/01/15 15:14 +0100, Jan Pokorný wrote:
> Timeline?
> Best if you send me your public keys before 2015-02-02. I will then
> compile a list of the attendees together with their keys and publish
> it at https://people.redhat.com/jpokorny/keysigning/2015-ha/
> so you can print it out and be ready for the party.
>
> Thanks for your cooperation, looking forward to this side-event and
> hope this will be beneficial to all involved.

Thanks for participating.

Please print out
https://people.redhat.com/jpokorny/keysigning/2015-ha/complete.html
(best in landscape format), check your fingerprints there, and, indeed,
prepare your ID document; then you are ready to proceed to the signing
event, which is currently planned for 2015-02-05 16:30 CET:
http://plan.alteeve.ca/index.php/Main_Page#Feb_5th
(I'll post an update should it change).

-- 
Jan
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: 
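For anyone new to the process, the steps described above boil down to a few
GnuPG commands along these lines (the key ID and address are placeholders,
not anyone's real key):

    # before the deadline: export the public key you want signed
    gpg --armor --export alice@example.org > alice.asc

    # at the party: compare the fingerprint your keyring reports against
    # the printed list, alongside checking the owner's ID document
    gpg --fingerprint 0xDEADBEEF

    # afterwards: sign the verified key and export it to send back/upload
    gpg --sign-key 0xDEADBEEF
    gpg --armor --export 0xDEADBEEF > alice-signed.asc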
From lists at alteeve.ca  Mon Feb 2 17:07:34 2015
From: lists at alteeve.ca (Digimer)
Date: Mon, 02 Feb 2015 12:07:34 -0500
Subject: [Linux-cluster] [Pacemaker] HA Summit Key-signing Party
In-Reply-To: <20150202164810.GA9404@redhat.com>
References: <540D853F.3090109@redhat.com> <54B4ADAA.5080803@alteeve.ca>
	<20150126141438.GE21558@redhat.com> <20150202164810.GA9404@redhat.com>
Message-ID: <54CFAED6.9050000@alteeve.ca>

On 02/02/15 11:48 AM, Jan Pokorný wrote:
> On 26/01/15 15:14 +0100, Jan Pokorný wrote:
>> Timeline?
>> Best if you send me your public keys before 2015-02-02. I will then
>> compile a list of the attendees together with their keys and publish
>> it at https://people.redhat.com/jpokorny/keysigning/2015-ha/
>> so you can print it out and be ready for the party.
>>
>> Thanks for your cooperation, looking forward to this side-event and
>> hope this will be beneficial to all involved.
>
> Thanks for participating.
>
> Please print out
> https://people.redhat.com/jpokorny/keysigning/2015-ha/complete.html
> (best in landscape format), check your fingerprints there, and, indeed,
> prepare your ID document; then you are ready to proceed to the signing
> event, which is currently planned for 2015-02-05 16:30 CET:
> http://plan.alteeve.ca/index.php/Main_Page#Feb_5th
> (I'll post an update should it change).

Will there be a printer available in the room/area of the summit? If so,
it might be good to set aside a bit of time to help people new to PGP get
set up before the actual key-signing.

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?

From jpokorny at redhat.com  Tue Feb 3 13:04:04 2015
From: jpokorny at redhat.com (Jan =?utf-8?Q?Pokorn=C3=BD?=)
Date: Tue, 3 Feb 2015 14:04:04 +0100
Subject: [Linux-cluster] Call for keys, another round (Was: HA Summit Key-signing Party)
In-Reply-To: <54CFAED6.9050000@alteeve.ca>
References: <540D853F.3090109@redhat.com> <54B4ADAA.5080803@alteeve.ca>
	<20150126141438.GE21558@redhat.com> <20150202164810.GA9404@redhat.com>
	<54CFAED6.9050000@alteeve.ca>
Message-ID: <20150203130404.GE9404@redhat.com>

Update on the event below:

On 02/02/15 12:07 -0500, Digimer wrote:
> On 02/02/15 11:48 AM, Jan Pokorný wrote:
>> On 26/01/15 15:14 +0100, Jan Pokorný wrote:
>>> Timeline?
>>> Best if you send me your public keys before 2015-02-02. I will then
>>> compile a list of the attendees together with their keys and publish
>>> it at https://people.redhat.com/jpokorny/keysigning/2015-ha/
>>> so you can print it out and be ready for the party.
>>>
>>> Thanks for your cooperation, looking forward to this side-event and
>>> hope this will be beneficial to all involved.
>>
>> Thanks for participating.
>>
>> Please print out
>> https://people.redhat.com/jpokorny/keysigning/2015-ha/complete.html
>> (best in landscape format), check your fingerprints there, and, indeed,
>> prepare your ID document; then you are ready to proceed to the signing
>> event, which is currently planned for 2015-02-05 16:30 CET:
>> http://plan.alteeve.ca/index.php/Main_Page#Feb_5th
>> (I'll post an update should it change).
>
> Will there be a printer available in the room/area of the summit? If so, it
> might be good to set aside a bit of time to help people new to PGP get set
> up before the actual key-signing.

Due to popular demand, and in order not to push back those jumping
onboard after the preliminary deadline, let's give it one more round.
Should you have any personal key still to run through the signing event,
please send me your signed keys (preferably as per instructions [1]) by
2015-02-04 8:00 CET, and I will compile an additional list that I'll hand
out to you in printed form during the summit (for practical reasons;
still get complete.html printed on your own as per the original plan if
possible, please).
Except for those who've already done that :]

[1] https://www.redhat.com/archives/linux-cluster/2015-January/msg00020.html

-- 
Jan
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 819 bytes
Desc: not available
URL: 

From yamato at redhat.com  Thu Feb 5 09:57:58 2015
From: yamato at redhat.com (Masatake YAMATO)
Date: Thu, 05 Feb 2015 18:57:58 +0900 (JST)
Subject: [Linux-cluster] wrong error messages in fence-virt
Message-ID: <20150205.185758.1478280784128087668.yamato@redhat.com>

Is fence-virt still maintained?
I cannot find the git repository for it.
There is one at sf.net. However, it looks obsolete.

With my broken configuration, I got the following debug output from
fence_xvm...

# fence_xvm -H targethost -o status -dddddd
Debugging threshold is now 6
-- args @ 0x7fff762de810 --
...
Opening /dev/urandom
Sending to 225.0.0.12 via 192.168.122.113
Waiting for connection from XVM host daemon.
Issuing TCP challenge
> read: Is a directory
Invalid response to challenge
Operation failed

Look at the line marked with '>'. The error message seems strange to me
because, as far as I can tell from reading the source code, read() is
called on a socket connected to fence_virtd.

So I did a code walkthrough and found two bugs:

1. The results of the read() and write() system calls are checked
   incorrectly: perror() is called even when the call succeeded but
   transferred fewer bytes than expected.

2. "read" is passed as the argument to perror() when the write() system
   call fails.

Neither is critical if fence_virtd is configured well, but the messages
may confuse users when it is not.

The following patch is not tested at all, but it illustrates what I mean
in the list above.

Masatake YAMATO

--- fence-virt-0.3.2/common/simple_auth.c	2013-11-05 01:08:35.000000000 +0900
+++ fence-virt-0.3.2/common/simple_auth.c.new	2015-02-05 18:40:53.471029118 +0900
@@ -260,9 +260,13 @@
 		return 0;
 	}
 
-	if (read(fd, response, sizeof(response)) < sizeof(response)) {
+	ret = read(fd, response, sizeof(response));
+	if (ret < 0) {
 		perror("read");
 		return 0;
+	} else if (ret < sizeof(response)) {
+		fprintf(stderr, "RESPONSE is too short(%d) in %s\n", ret, __FUNCTION__);
+		return 0;
 	}
 
 	ret = !memcmp(response, hash, sizeof(response));
@@ -333,7 +337,7 @@
 	HASH_Destroy(h);
 
 	if (write(fd, hash, sizeof(hash)) < sizeof(hash)) {
-		perror("read");
+		perror("write");
 		return 0;
 	}

From yamato at redhat.com  Thu Feb 5 13:04:04 2015
From: yamato at redhat.com (Masatake YAMATO)
Date: Thu, 05 Feb 2015 22:04:04 +0900 (JST)
Subject: [Linux-cluster] wrong error messages in fence-virt
In-Reply-To: <20150205.185758.1478280784128087668.yamato@redhat.com>
References: <20150205.185758.1478280784128087668.yamato@redhat.com>
Message-ID: <20150205.220404.1627705244556936132.yamato@redhat.com>

Mistakenly, I sent an older patch. The new one is attached to this mail.

Masatake YAMATO

On Thu, 05 Feb 2015 18:57:58 +0900 (JST), Masatake YAMATO wrote:
> Is fence-virt still maintained?
> I cannot find the git repository for it.
> There is one at sf.net. However, it looks obsolete.
> 
> With my broken configuration, I got the following debug output from
> fence_xvm...
> 
> # fence_xvm -H targethost -o status -dddddd
> Debugging threshold is now 6
> -- args @ 0x7fff762de810 --
> ...
> Opening /dev/urandom
> Sending to 225.0.0.12 via 192.168.122.113
> Waiting for connection from XVM host daemon.
> Issuing TCP challenge
>> read: Is a directory
> Invalid response to challenge
> Operation failed
> 
> Look at the line marked with '>'.
> The error message seems strange to me
> because, as far as I can tell from reading the source code, read() is
> called on a socket connected to fence_virtd.
> 
> So I did a code walkthrough and found two bugs:
> 
> 1. The results of the read() and write() system calls are checked
>    incorrectly: perror() is called even when the call succeeded but
>    transferred fewer bytes than expected.
> 
> 2. "read" is passed as the argument to perror() when the write() system
>    call fails.
> 
> Neither is critical if fence_virtd is configured well, but the messages
> may confuse users when it is not.
> 
> The following patch is not tested at all, but it illustrates what I mean
> in the list above.
> 
> Masatake YAMATO
> 
> --- fence-virt-0.3.2/common/simple_auth.c	2013-11-05 01:08:35.000000000 +0900
> +++ fence-virt-0.3.2/common/simple_auth.c.new	2015-02-05 18:40:53.471029118 +0900
> @@ -260,9 +260,13 @@
>  		return 0;
>  	}
>  
> -	if (read(fd, response, sizeof(response)) < sizeof(response)) {
> +	ret = read(fd, response, sizeof(response));
> +	if (ret < 0) {
>  		perror("read");
>  		return 0;
> +	} else if (ret < sizeof(response)) {
> +		fprintf(stderr, "RESPONSE is too short(%d) in %s\n", ret, __FUNCTION__);
> +		return 0;
>  	}
>  
>  	ret = !memcmp(response, hash, sizeof(response));
> @@ -333,7 +337,7 @@
>  	HASH_Destroy(h);
>  
>  	if (write(fd, hash, sizeof(hash)) < sizeof(hash)) {
> -		perror("read");
> +		perror("write");
>  		return 0;
>  	}
> 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: simple_auth.c.patch
Type: text/x-patch
Size: 1066 bytes
Desc: not available
URL: 

From yamato at redhat.com  Mon Feb 9 08:19:12 2015
From: yamato at redhat.com (Masatake YAMATO)
Date: Mon, 09 Feb 2015 17:19:12 +0900 (JST)
Subject: [Linux-cluster] git repo of fence-virt
Message-ID: <20150209.171912.399832125393796182.yamato@redhat.com>

Hi,

Before sending a PR (https://github.com/ryan-mccabe/fence-virt/pull/2) to
you, I had to search for the git repository for fence-virt again and again.

Although http://sourceforge.net/p/fence-virt/code/ci/master/tree/ is no
longer maintained, it is still advertised at
https://fedorahosted.org/cluster/wiki/FenceVirt.

Could you update these old URLs to the newer one
(https://github.com/ryan-mccabe/fence-virt)? Then people who are
interested in fence-virt can work on the latest code.

Regards,
Masatake YAMATO

From rmccabe at redhat.com  Mon Feb 9 13:14:25 2015
From: rmccabe at redhat.com (Ryan McCabe)
Date: Mon, 9 Feb 2015 08:14:25 -0500
Subject: [Linux-cluster] git repo of fence-virt
In-Reply-To: <20150209.171912.399832125393796182.yamato@redhat.com>
References: <20150209.171912.399832125393796182.yamato@redhat.com>
Message-ID: <20150209131424.GA12489@redhat.com>

On Mon, Feb 09, 2015 at 05:19:12PM +0900, Masatake YAMATO wrote:
> Hi,
>
> Before sending a PR (https://github.com/ryan-mccabe/fence-virt/pull/2) to
> you, I had to search for the git repository for fence-virt again and again.
>
> Although http://sourceforge.net/p/fence-virt/code/ci/master/tree/ is no
> longer maintained, it is still advertised at
> https://fedorahosted.org/cluster/wiki/FenceVirt.
>
> Could you update these old URLs to the newer one
> (https://github.com/ryan-mccabe/fence-virt)? Then people who are
> interested in fence-virt can work on the latest code.

Hi,

Thanks. I didn't realize it was pointing to the old repo. I'll get that
corrected, and I'll see if Lon can take down the old sourceforge site.
Thanks,

Ryan

From mgrac at redhat.com  Mon Feb 9 14:34:32 2015
From: mgrac at redhat.com (Marek "marx" Grac)
Date: Mon, 09 Feb 2015 15:34:32 +0100
Subject: [Linux-cluster] fence-agents-4.0.15 stable release
Message-ID: <54D8C578.7030301@redhat.com>

Welcome to the fence-agents 4.0.15 release

This release includes several bugfixes:

* Tripp Lite PDUs are now supported by fence_tripplite_snmp (symlink to
  fence_apc_snmp)
* Default values in metadata sometimes differed from those actually used;
  this is fixed now
* improvements in testing

The new source tarball can be downloaded here:

https://fedorahosted.org/releases/f/e/fence-agents/fence-agents-4.0.15.tar.xz

To report bugs or issues:

https://bugzilla.redhat.com/

Would you like to meet the cluster team or members of its community?

Join us on IRC (irc.freenode.net #linux-cluster) and share your
experience with other sysadmins or power users.

Thanks/congratulations to all people who contributed to achieving this
great milestone.

m,

From lars.ellenberg at linbit.com  Thu Feb 12 00:29:35 2015
From: lars.ellenberg at linbit.com (Lars Ellenberg)
Date: Thu, 12 Feb 2015 01:29:35 +0100
Subject: [Linux-cluster] Call for review of undocumented parameters in resource agent meta data
In-Reply-To: <20150130205249.GA24674@walrus.homenet>
References: <20150130205249.GA24674@walrus.homenet>
Message-ID: <20150212002935.GC20897@soda.linbit>

On Fri, Jan 30, 2015 at 09:52:49PM +0100, Dejan Muhamedagic wrote:
> Hello,
>
> We've tagged today (Jan 30) a new stable resource-agents release
> (3.9.6) in the upstream repository.
>
> Big thanks go to all contributors! Needless to say, without you
> this release would not be possible.

Big thanks to Dejan, who once again did what I had meant to do in late
2013 already, but simply pushed off for over a year (and no one else
stepped up, either...)

So: Thank You.

I just today noticed that apparently some resource agents accept and use
parameters that are not documented in their meta data.

I came up with a bash two-liner, which likely still produces a lot of
noise, because it does not take into account that some agents "source"
additional helper files.

But here is the list:
--- used, but not described
+++ described, but apparently not used.

EvmsSCC	+OCF_RESKEY_ignore_deprecation
Evmsd	+OCF_RESKEY_ignore_deprecation
?? intentionally undocumented ??

IPaddr	+OCF_RESKEY_iflabel
IPaddr	-OCF_RESKEY_netmask
Not sure.

IPaddr2	-OCF_RESKEY_netmask
intentional, backward compat, quoting the agent:
	# Note: We had a version out there for a while which used
	# netmask instead of cidr_netmask. Don't remove this aliasing code!
Please help review these:

IPsrcaddr	-OCF_RESKEY_ip
IPsrcaddr	+OCF_RESKEY_cidr_netmask
IPv6addr.c	-OCF_RESKEY_cidr_netmask
IPv6addr.c	-OCF_RESKEY_ipv6addr
IPv6addr.c	-OCF_RESKEY_nic
LinuxSCSI	+OCF_RESKEY_ignore_deprecation
Squid	-OCF_RESKEY_squid_confirm_trialcount
Squid	-OCF_RESKEY_squid_opts
Squid	-OCF_RESKEY_squid_suspend_trialcount
SysInfo	-OCF_RESKEY_clone
WAS6	-OCF_RESKEY_profileName
apache	+OCF_RESKEY_use_ipv6
conntrackd	-OCF_RESKEY_conntrackd
dnsupdate	-OCF_RESKEY_opts
dnsupdate	+OCF_RESKEY_nsupdate_opts
docker	-OCF_RESKEY_container
ethmonitor	-OCF_RESKEY_check_level
ethmonitor	-OCF_RESKEY_multiplicator

galera	+OCF_RESKEY_additional_parameters
galera	+OCF_RESKEY_binary
galera	+OCF_RESKEY_client_binary
galera	+OCF_RESKEY_config
galera	+OCF_RESKEY_datadir
galera	+OCF_RESKEY_enable_creation
galera	+OCF_RESKEY_group
galera	+OCF_RESKEY_log
galera	+OCF_RESKEY_pid
galera	+OCF_RESKEY_socket
galera	+OCF_RESKEY_user
Probably all bogus; it sources "mysql-common.sh".
Someone please have a more detailed look.

iSCSILogicalUnit	+OCF_RESKEY_product_id
iSCSILogicalUnit	+OCF_RESKEY_vendor_id
false positive surprise: florian learned some wizardry back then ;-)
	for var in scsi_id scsi_sn vendor_id product_id; do
		envar="OCF_RESKEY_${var}"
		if [ -n "${!envar}" ]; then
			params="${params} ${var}=${!envar}"
		fi
	done
If such magic is used elsewhere, it could mask "used but not documented"
cases.

iface-bridge	-OCF_RESKEY_multicast_querier
!! Yep, that needs to be documented!

mysql-proxy	-OCF_RESKEY_group
mysql-proxy	-OCF_RESKEY_user
Oops, apparently my magic scriptlet below needs to learn to ignore script
comments...

named	-OCF_RESKEY_rootdir
!! Probably a bug: named_rootdir is documented.

nfsserver	-OCF_RESKEY_nfs_notify_cmd
!! Yep, that needs to be documented!

nginx	-OCF_RESKEY_client
nginx	+OCF_RESKEY_testclient
!! client is used, but not documented;
!! testclient is documented, but unused... Bug?

nginx	-OCF_RESKEY_nginx
Bogus. Needs to be dropped from the leading comment block.

oracle	-OCF_RESKEY_tns_admin
!! Yep, that needs to be documented!

pingd	+OCF_RESKEY_ignore_deprecation
?? intentionally undocumented ??

pingd	-OCF_RESKEY_update
!! Yep, is undocumented.

sg_persist	+OCF_RESKEY_binary
sg_persist	-OCF_RESKEY_sg_persist_binary
!! BUG? binary vs sg_persist_binary

varnish	-OCF_RESKEY_binary
!! Yep, is undocumented.

Please someone find the time to prepare pull requests to fix these...

Thanks,
Lars

-----------------------------------------
The list was generated by the scriptlet below, which can be improved.
The improved version should probably be part of a "unit test" check when
building resource-agents.

# In the git checkout of the resource agents,
# get a list of files that look like actual agent scripts.
cd heartbeat
A=$(git ls-files | xargs grep -s -l '

From Mark.Vallevand at UNISYS.com  Fri Feb 13 16:16:00 2015
From: Mark.Vallevand at UNISYS.com (Vallevand, Mark K)
Date: Fri, 13 Feb 2015 16:16:00 +0000
Subject: [Linux-cluster] Is there a way for a resource agent to know the previous node on which it was active?
Message-ID: <2537359766394f3f8036dd6b9a1f0510@US-EXCH13-5.na.uis.unisys.com>

Is there a way for a resource agent to know the previous node on which it
was active?

Regards.
Mark K Vallevand   Mark.Vallevand at Unisys.com
Outside of a dog, a book is man's best friend. Inside of a dog, it's too
dark to read. - Groucho

THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY
MATERIAL and is thus for use only by the intended recipient. If you
received this in error, please contact the sender and delete the e-mail
and its attachments from all computers.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From netplus.root at gmail.com  Fri Feb 13 16:39:31 2015
From: netplus.root at gmail.com (Equipe R&S Netplus)
Date: Fri, 13 Feb 2015 17:39:31 +0100
Subject: [Linux-cluster] NFS HA
Message-ID: 

Hello,

I would like to set up an NFS cluster.
With RHCS, I use the resource agent "nfsserver".

But I have a question:
Is it possible to manage an NFS server such that the NFS clients "will not
be aware of any loss of service"? In other words, if the NFS service fails
over, the NFS clients don't see any change.

Currently, when there is a failover, I can't access the NFS server anymore.
Indeed, I get the message "Stale NFS file handle".
In the client NFS log:
<<
NFS: server X.X.X.X error: fileid changed
>>

Is there any solution, please?
Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Mark.Vallevand at UNISYS.com  Fri Feb 13 17:12:54 2015
From: Mark.Vallevand at UNISYS.com (Vallevand, Mark K)
Date: Fri, 13 Feb 2015 17:12:54 +0000
Subject: [Linux-cluster] Is there a way for a resource agent to know the previous node on which it was active?
In-Reply-To: <2537359766394f3f8036dd6b9a1f0510@US-EXCH13-5.na.uis.unisys.com>
References: <2537359766394f3f8036dd6b9a1f0510@US-EXCH13-5.na.uis.unisys.com>
Message-ID: 

I didn't see an environment variable with that information.
Any other ways to determine this?

Regards.
Mark K Vallevand   Mark.Vallevand at Unisys.com
Outside of a dog, a book is man's best friend. Inside of a dog, it's too
dark to read. - Groucho

THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY
MATERIAL and is thus for use only by the intended recipient. If you
received this in error, please contact the sender and delete the e-mail
and its attachments from all computers.

From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Vallevand, Mark K
Sent: Friday, February 13, 2015 10:16 AM
To: linux clustering
Subject: [Linux-cluster] Is there a way for a resource agent to know the previous node on which it was active?

Is there a way for a resource agent to know the previous node on which it
was active?

Regards.
Mark K Vallevand   Mark.Vallevand at Unisys.com
Outside of a dog, a book is man's best friend. Inside of a dog, it's too
dark to read. - Groucho

THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY
MATERIAL and is thus for use only by the intended recipient. If you
received this in error, please contact the sender and delete the e-mail
and its attachments from all computers.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emi2fast at gmail.com  Fri Feb 13 17:16:59 2015
From: emi2fast at gmail.com (emmanuel segura)
Date: Fri, 13 Feb 2015 18:16:59 +0100
Subject: [Linux-cluster] NFS HA
In-Reply-To: 
References: 
Message-ID: 

I had the same problem with NFS HA on RHCS; I solved it by using UDP when
I mounted the shares on the client side.

2015-02-13 17:39 GMT+01:00 Equipe R&S Netplus :
> Hello,
>
> I would like to set up an NFS cluster.
> With RHCS, I use the resource agent "nfsserver".
>
> But I have a question:
> Is it possible to manage an NFS server such that the NFS clients "will not
> be aware of any loss of service"? In other words, if the NFS service fails
> over, the NFS clients don't see any change.
>
> Currently, when there is a failover, I can't access the NFS server anymore.
> Indeed, I get the message "Stale NFS file handle".
> In the client NFS log:
> <<
> NFS: server X.X.X.X error: fileid changed
> >>
>
> Is there any solution, please?
> Thank you.
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster

-- 
esta es mi vida e me la vivo hasta que dios quiera

From dan131riley at gmail.com  Fri Feb 13 18:56:17 2015
From: dan131riley at gmail.com (Dan Riley)
Date: Fri, 13 Feb 2015 13:56:17 -0500
Subject: [Linux-cluster] NFS HA
In-Reply-To: 
References: 
Message-ID: <97DDAB42-F7C9-4CC3-B96E-77D3997AC025@gmail.com>

On the original question, you need to specify the fsid for the file
system. Otherwise you get an fsid that's derived in part from the device
numbers, so different device numbers on the failover lead to a different
fsid.

wrt NFS over UDP, it isn't supported with NFSv4, and will lead to random
hangs. At least for us, NFSv4 is a big enough win that we gave up NFS
over UDP.

-dan

> On Feb 13, 2015, at 12:16, emmanuel segura wrote:
>
> I had the same problem with NFS HA on RHCS; I solved it by using UDP when
> I mounted the shares on the client side.
>
> 2015-02-13 17:39 GMT+01:00 Equipe R&S Netplus :
>> Hello,
>>
>> I would like to set up an NFS cluster.
>> With RHCS, I use the resource agent "nfsserver".
>>
>> But I have a question:
>> Is it possible to manage an NFS server such that the NFS clients "will not
>> be aware of any loss of service"? In other words, if the NFS service fails
>> over, the NFS clients don't see any change.
>>
>> Currently, when there is a failover, I can't access the NFS server anymore.
>> Indeed, I get the message "Stale NFS file handle".
>> In the client NFS log:
>> <<
>> NFS: server X.X.X.X error: fileid changed
>> >>
>>
>> Is there any solution, please?
>> Thank you.
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>
> --
> esta es mi vida e me la vivo hasta que dios quiera
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster

From Colin.Simpson at iongeo.com  Fri Feb 13 19:36:34 2015
From: Colin.Simpson at iongeo.com (Colin Simpson)
Date: Fri, 13 Feb 2015 19:36:34 +0000
Subject: [Linux-cluster] NFS HA
In-Reply-To: <97DDAB42-F7C9-4CC3-B96E-77D3997AC025@gmail.com>
References: <97DDAB42-F7C9-4CC3-B96E-77D3997AC025@gmail.com>
Message-ID: <1423856194.5249.110.camel@iongeo.com>

Is there a good document on NFSv4 best practice on a failover cluster?

"Hard" mounting seems to make failover work for us. I'd rather not use it
everywhere, though, as we have VPN laptop client machines that we'd rather
didn't hang if the connection drops (maybe soft with suitable timeo and
retrans options would be good for these boxes).

I want to turn up the security of my RHEL6 NFSv4 clusters to use Kerberos
auth. But I read somewhere (it may have been in Bugzilla) that if you do
this you can't have more than one NFS service running in the cluster,
which we quite like just now for load balancing between the nodes.

The original NFS cluster cookbook really helped me get this going, but it
is from the RHEL4 era (so NFSv3 and not Kerberized). Or is there a new one
somewhere....

Thanks

Colin

On Fri, 2015-02-13 at 13:56 -0500, Dan Riley wrote:
> On the original question, you need to specify the fsid for the
> file system. Otherwise you get an fsid that's derived in part
> from the device numbers, so different device numbers on the
> failover lead to a different fsid.
>
> wrt NFS over UDP, it isn't supported with NFSv4, and will lead to
> random hangs. At least for us, NFSv4 is a big enough win that we
> gave up NFS over UDP.
>
> -dan
>

________________________________

This email and any files transmitted with it are confidential and are
intended solely for the use of the individual or entity to whom they are
addressed. If you are not the original recipient or the person responsible
for delivering the email to the intended recipient, be advised that you
have received this email in error, and that any use, dissemination,
forwarding, printing, or copying of this email is strictly prohibited. If
you received this email in error, please immediately notify the sender and
delete the original.
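To make the trade-off Colin describes concrete, the two client-side mount
styles would look roughly like this (the hostname and paths are
placeholders, and the soft-mount numbers are illustrative rather than
tested values):

    # cluster clients: hard mount, so I/O blocks across a failover
    # instead of returning errors
    mount -t nfs4 -o rw,hard,intr nfs-svc.example.com:/exports/test /mnt/test

    # VPN laptops: soft mount with bounded retries, so a dead link
    # eventually returns an error instead of hanging
    # (timeo is in tenths of a second)
    mount -t nfs4 -o rw,soft,timeo=100,retrans=3 nfs-svc.example.com:/exports/test /mnt/test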
From netplus.root at gmail.com  Fri Feb 13 21:16:15 2015
From: netplus.root at gmail.com (Equipe R&S Netplus)
Date: Fri, 13 Feb 2015 22:16:15 +0100
Subject: [Linux-cluster] NFS HA
In-Reply-To: <1423856194.5249.110.camel@iongeo.com>
References: <97DDAB42-F7C9-4CC3-B96E-77D3997AC025@gmail.com>
	<1423856194.5249.110.camel@iongeo.com>
Message-ID: 

> I had the same problem with NFS HA on RHCS; I solved it by using UDP when
> I mounted the shares on the client side.

Thank you, but I prefer a TCP NFS mount.

> On the original question, you need to specify the fsid for the
> file system. Otherwise you get an fsid that's derived in part
> from the device numbers, so different device numbers on the
> failover lead to a different fsid.

For my test, I specified an fsid for every NFS export; it doesn't seem to
be the root problem.
Example:
<<
/exports *(rw,fsid=0,insecure,no_subtree_check)
/exports/test 192.168.0.0/24(rw,nohide,fsid=1,insecure,no_subtree_check,async)
>>

> "Hard" mounting seems to make failover work for us. I'd rather not use it
> everywhere, though, as we have VPN laptop client machines that we'd rather
> didn't hang if the connection drops (maybe soft with suitable timeo and
> retrans options would be good for these boxes).

What configuration do you use with "hard" mounting to allow the exported
NFS service to fail over?

Thank you for your response.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From misch at schwartzkopff.org  Sat Feb 14 07:57:31 2015
From: misch at schwartzkopff.org (Michael Schwartzkopff)
Date: Sat, 14 Feb 2015 08:57:31 +0100
Subject: [Linux-cluster] Is there a way for a resource agent to know the previous node on which it was active?
In-Reply-To: 
References: 
Message-ID: <6721998.2hbWCMOiuH@nb003>

On Friday, 13 February 2015, at 17:12:54, Vallevand, Mark K wrote:
> I didn't see an environment variable with that information.
> Any other ways to determine this?

You could re-write your resource agent to use the crm_attribute command.
This adds attributes for a resource to the CIB. This attribute could hold
the information about the node the resource is running on.

It would have to be updated by your resource agent (script) if the
resource migrates.

-- 
Dr. Michael Schwartzkopff
Guardinistr. 63
81375 München

Tel: (0162) 1650044
Fax: (089) 620 304 13
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 230 bytes
Desc: This is a digitally signed message part.
URL: 
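A minimal sketch of what Michael suggests, for a shell-based OCF agent;
the attribute name and helper functions here are made up for illustration,
not part of any shipped agent:

    ATTR="last-active-${OCF_RESOURCE_INSTANCE}"

    record_node() {
        # called from the start action: remember where we run now
        crm_attribute --type crm_config --name "$ATTR" --update "$(crm_node -n)"
    }

    previous_node() {
        # prints the recorded node name, or nothing if it was never set
        crm_attribute --type crm_config --name "$ATTR" --query --quiet 2>/dev/null
    }

As Andrew notes later in this thread, an agent-maintained attribute is
only as reliable as the agent's own updates, so treat it as a hint rather
than authoritative state.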
From misch at schwartzkopff.org  Sat Feb 14 08:02:09 2015
From: misch at schwartzkopff.org (Michael Schwartzkopff)
Date: Sat, 14 Feb 2015 09:02:09 +0100
Subject: [Linux-cluster] NFS HA
In-Reply-To: 
References: 
Message-ID: <8188824.6S4ojBTNoA@nb003>

On Friday, 13 February 2015, at 17:39:31, Equipe R&S Netplus wrote:
> Hello,
>
> I would like to set up an NFS cluster.
> With RHCS, I use the resource agent "nfsserver".
>
> But I have a question:
> Is it possible to manage an NFS server such that the NFS clients "will not
> be aware of any loss of service"? In other words, if the NFS service fails
> over, the NFS clients don't see any change.
>
> Currently, when there is a failover, I can't access the NFS server anymore.
> Indeed, I get the message "Stale NFS file handle".
> In the client NFS log:
> <<
> NFS: server X.X.X.X error: fileid changed
> >>
>
> Is there any solution, please?
> Thank you.

Be sure that you have a virtual IP address that migrates together with the
NFS server in the cluster.

Timeouts during NFS server migration are a problem in NFSv3. The server
should inform the clients that it rebooted, and the clients should reclaim
their locks. This locking is much more elegant in NFSv4, so on a clustered
NFS server you should use v4. See the resource agent description for
locking timeouts.

TCP or UDP is just a transport protocol and not important for the problems
you describe.

-- 
Dr. Michael Schwartzkopff
Guardinistr. 63
81375 München

Tel: (0162) 1650044
Fax: (089) 620 304 13
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 230 bytes
Desc: This is a digitally signed message part.
URL: 

From Colin.Simpson at iongeo.com  Mon Feb 16 11:19:06 2015
From: Colin.Simpson at iongeo.com (Colin Simpson)
Date: Mon, 16 Feb 2015 11:19:06 +0000
Subject: [Linux-cluster] NFS HA
In-Reply-To: 
References: <97DDAB42-F7C9-4CC3-B96E-77D3997AC025@gmail.com>
	<1423856194.5249.110.camel@iongeo.com>
Message-ID: <1424085546.26133.25.camel@iongeo.com>

Nothing special.

In resources,

Then a service,

Obviously you'll need to tune this for your use case.

So a floating service IP address. Client mount options are just
"rw,hard,intr", with a DNS name associated to this service IP address.

Thanks

Colin

On Fri, 2015-02-13 at 22:16 +0100, Equipe R&S Netplus wrote:
>
> What configuration do you use with "hard" mounting to allow the exported
> NFS service to fail over?
>
> Thank you for your response.
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster

________________________________

This email and any files transmitted with it are confidential and are
intended solely for the use of the individual or entity to whom they are
addressed. If you are not the original recipient or the person responsible
for delivering the email to the intended recipient, be advised that you
have received this email in error, and that any use, dissemination,
forwarding, printing, or copying of this email is strictly prohibited. If
you received this email in error, please immediately notify the sender and
delete the original.

From dan131riley at gmail.com  Mon Feb 16 14:40:31 2015
From: dan131riley at gmail.com (Dan Riley)
Date: Mon, 16 Feb 2015 09:40:31 -0500
Subject: [Linux-cluster] NFS HA
In-Reply-To: 
References: <97DDAB42-F7C9-4CC3-B96E-77D3997AC025@gmail.com>
	<1423856194.5249.110.camel@iongeo.com>
Message-ID: <4931B47F-5EBC-44BF-A384-3298FFC88D7F@gmail.com>

> On Feb 13, 2015, at 16:16, Equipe R&S Netplus wrote:
>> On the original question, you need to specify the fsid for the
>> file system. Otherwise you get an fsid that's derived in part
>> from the device numbers, so different device numbers on the
>> failover lead to a different fsid.
>
> For my test, I specified an fsid for every NFS export; it doesn't seem to
> be the root problem.
> Example:
> <<
> /exports *(rw,fsid=0,insecure,no_subtree_check)
> /exports/test 192.168.0.0/24(rw,nohide,fsid=1,insecure,no_subtree_check,async)
> >>

How are you managing the failover? If you are doing failover of the file
system (e.g., via SAN), then I'd expect the HA service manager (rgmanager,
pacemaker, etc.) to handle the export, since you don't want to do the
export until the file system is mounted. If you are exporting a replica
file system, then I believe it has to be a block-level replica (e.g.,
DRBD) -- a file system replica via rsync or such will have different inode
numbers, and AFAIR the inode number shows up in the NFS file handle.

If you're doing something different, you'll need to give us details.

-dan

From Mark.Vallevand at UNISYS.com  Mon Feb 16 14:55:26 2015
From: Mark.Vallevand at UNISYS.com (Vallevand, Mark K)
Date: Mon, 16 Feb 2015 14:55:26 +0000
Subject: [Linux-cluster] Is there a way for a resource agent to know the previous node on which it was active?
In-Reply-To: <6721998.2hbWCMOiuH@nb003>
References: <2537359766394f3f8036dd6b9a1f0510@US-EXCH13-5.na.uis.unisys.com>
	<6721998.2hbWCMOiuH@nb003>
Message-ID: <636a30e6138c41f69966e854004a5adb@US-EXCH13-5.na.uis.unisys.com>

Thank you.

Regards.
Mark K Vallevand   Mark.Vallevand at Unisys.com
Outside of a dog, a book is man's best friend. Inside of a dog, it's too
dark to read. - Groucho

THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY
MATERIAL and is thus for use only by the intended recipient. If you
received this in error, please contact the sender and delete the e-mail
and its attachments from all computers.

-----Original Message-----
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Michael Schwartzkopff
Sent: Saturday, February 14, 2015 01:58 AM
To: linux clustering
Subject: Re: [Linux-cluster] Is there a way for a resource agent to know the previous node on which it was active?

On Friday, 13 February 2015, at 17:12:54, Vallevand, Mark K wrote:
> I didn't see an environment variable with that information.
> Any other ways to determine this?

You could re-write your resource agent to use the crm_attribute command.
This adds attributes for a resource to the CIB. This attribute could hold
the information about the node the resource is running on.

It would have to be updated by your resource agent (script) if the
resource migrates.

-- 
Dr. Michael Schwartzkopff
Guardinistr. 63
81375 München

Tel: (0162) 1650044
Fax: (089) 620 304 13

From netplus.root at gmail.com  Tue Feb 17 16:54:46 2015
From: netplus.root at gmail.com (Netplus)
Date: Tue, 17 Feb 2015 17:54:46 +0100
Subject: [Linux-cluster] NFS HA
In-Reply-To: <4931B47F-5EBC-44BF-A384-3298FFC88D7F@gmail.com>
References: <97DDAB42-F7C9-4CC3-B96E-77D3997AC025@gmail.com>
	<1423856194.5249.110.camel@iongeo.com>
	<4931B47F-5EBC-44BF-A384-3298FFC88D7F@gmail.com>
Message-ID: 

> This locking is much more elegant in NFSv4, so on a clustered NFS server
> you should use v4. See the resource agent description for locking
> timeouts.

OK, I'll follow your advice.

> Obviously you'll need to tune this for your use case.
>
> So a floating service IP address. Client mount options are just
> "rw,hard,intr", with a DNS name associated to this service IP address.

Thank you, Colin, for your example.
A comment about NFSv4: with RHCS on CentOS 6, I get an error when I use
the resource "nfsexport". It seems that "nfsserver" is better in that case.

> How are you managing the failover?
> If you are doing failover of the file
> system (e.g., via SAN), then I'd expect the HA service manager (rgmanager,
> pacemaker, etc.) to handle the export, since you don't want to do the
> export until the file system is mounted. If you are exporting a replica
> file system, then I believe it has to be a block-level replica (e.g.,
> DRBD) -- a file system replica via rsync or such will have different inode
> numbers, and AFAIR the inode number shows up in the NFS file handle.

I was in the second case (file system replica), so that's why I had the
error about the ID. Thank you very much for enlightening me!

I use a centralized file system to resolve this. I think it's the only way
to permit transparent NFS failover?
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From andrew at beekhof.net  Wed Feb 18 02:00:37 2015
From: andrew at beekhof.net (Andrew Beekhof)
Date: Wed, 18 Feb 2015 13:00:37 +1100
Subject: [Linux-cluster] Is there a way for a resource agent to know the previous node on which it was active?
In-Reply-To: <6721998.2hbWCMOiuH@nb003>
References: <2537359766394f3f8036dd6b9a1f0510@US-EXCH13-5.na.uis.unisys.com>
	<6721998.2hbWCMOiuH@nb003>
Message-ID: <44ABC049-9DAE-4A70-8C82-4BF4F009BD9D@beekhof.net>

> On 14 Feb 2015, at 6:57 pm, Michael Schwartzkopff wrote:
>
> On Friday, 13 February 2015, at 17:12:54, Vallevand, Mark K wrote:
>> I didn't see an environment variable with that information.
>> Any other ways to determine this?
>
> You could re-write your resource agent to use the crm_attribute command.
> This adds attributes for a resource to the CIB. This attribute could hold
> the information about the node the resource is running on.

Somewhat dangerous to rely on though.
What would be the reason for needing to know the previous node?

>
> It would have to be updated by your resource agent (script) if the
> resource migrates.
>
> -- 
> Dr. Michael Schwartzkopff
> Guardinistr. 63
> 81375 München
>
> Tel: (0162) 1650044
> Fax: (089) 620 304 13
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster

From nagemnna at gmail.com  Thu Feb 19 13:50:38 2015
From: nagemnna at gmail.com (Megan .)
Date: Thu, 19 Feb 2015 08:50:38 -0500
Subject: [Linux-cluster] Number of GFS2 mounts
Message-ID: 

Good Morning!

We have an 11-node CentOS 6.6 cluster configuration. We are using it to
share SAN mounts between servers (GFS2 via iSCSI with LVM). We have a
requirement to have 33 GFS2 mounts shared on the cluster (crazy, I know).
Are there any limitations on doing this? I couldn't find anything in the
documentation about the number of mounts, just the size of the mounts. Is
there anything I can do to tune our cluster to handle this requirement?

Thanks!

From swhiteho at redhat.com  Thu Feb 19 14:00:49 2015
From: swhiteho at redhat.com (Steven Whitehouse)
Date: Thu, 19 Feb 2015 14:00:49 +0000
Subject: [Linux-cluster] Number of GFS2 mounts
In-Reply-To: 
References: 
Message-ID: <54E5EC91.8090505@redhat.com>

Hi,

On 19/02/15 13:50, Megan . wrote:
> Good Morning!
>
> We have an 11-node CentOS 6.6 cluster configuration. We are using it to
> share SAN mounts between servers (GFS2 via iSCSI with LVM). We have a
> requirement to have 33 GFS2 mounts shared on the cluster (crazy, I know).
> Are there any limitations on doing this? I couldn't find anything in the
> documentation about the number of mounts, just the size of the mounts. Is
> there anything I can do to tune our cluster to handle this requirement?
>
> Thanks!
>
I don't think there should be any problem at that level; 33 mounts is not
that many,

Steve.

From debjyoti.mail at gmail.com  Thu Feb 19 14:04:07 2015
From: debjyoti.mail at gmail.com (Debjyoti Banerjee)
Date: Thu, 19 Feb 2015 19:34:07 +0530
Subject: [Linux-cluster] Number of GFS2 mounts
In-Reply-To: 
References: 
Message-ID: 

You have to configure CLVM in that case....

On Feb 19, 2015 7:31 PM, "Megan ." wrote:
> Good Morning!
>
> We have an 11-node CentOS 6.6 cluster configuration. We are using it to
> share SAN mounts between servers (GFS2 via iSCSI with LVM). We have a
> requirement to have 33 GFS2 mounts shared on the cluster (crazy, I know).
> Are there any limitations on doing this? I couldn't find anything in the
> documentation about the number of mounts, just the size of the mounts. Is
> there anything I can do to tune our cluster to handle this requirement?
>
> Thanks!
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nagemnna at gmail.com  Thu Feb 19 14:32:14 2015
From: nagemnna at gmail.com (Megan .)
Date: Thu, 19 Feb 2015 09:32:14 -0500
Subject: [Linux-cluster] Number of GFS2 mounts
In-Reply-To: 
References: 
Message-ID: 

We are using CLVM. Is there something special we need to do? Any tuning
links/advice? Or will it work fine for us out of the box?

On Thu, Feb 19, 2015 at 9:04 AM, Debjyoti Banerjee wrote:
> You have to configure CLVM in that case....
>
> On Feb 19, 2015 7:31 PM, "Megan ." wrote:
>>
>> Good Morning!
>>
>> We have an 11-node CentOS 6.6 cluster configuration. We are using it to
>> share SAN mounts between servers (GFS2 via iSCSI with LVM). We have a
>> requirement to have 33 GFS2 mounts shared on the cluster (crazy, I know).
>> Are there any limitations on doing this? I couldn't find anything in the
>> documentation about the number of mounts, just the size of the mounts. Is
>> there anything I can do to tune our cluster to handle this requirement?
>>
>> Thanks!
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster

From emi2fast at gmail.com  Thu Feb 19 16:52:45 2015
From: emi2fast at gmail.com (emmanuel segura)
Date: Thu, 19 Feb 2015 17:52:45 +0100
Subject: [Linux-cluster] Number of GFS2 mounts
In-Reply-To: 
References: 
Message-ID: 

You need to be sure your cluster fencing is working fine.

2015-02-19 15:32 GMT+01:00 Megan . :
> We are using CLVM. Is there something special we need to do? Any tuning
> links/advice? Or will it work fine for us out of the box?
>
> On Thu, Feb 19, 2015 at 9:04 AM, Debjyoti Banerjee wrote:
>> You have to configure CLVM in that case....
>>
>> On Feb 19, 2015 7:31 PM, "Megan ." wrote:
>>>
>>> Good Morning!
>>>
>>> We have an 11-node CentOS 6.6 cluster configuration. We are using it to
>>> share SAN mounts between servers (GFS2 via iSCSI with LVM). We have a
>>> requirement to have 33 GFS2 mounts shared on the cluster (crazy, I know).
>>> Are there any limitations on doing this? I couldn't find anything in the
>>> documentation about the number of mounts, just the size of the mounts. Is
>>> there anything I can do to tune our cluster to handle this requirement?
>>>
>>> Thanks!
>>>
>>> --
>>> Linux-cluster mailing list
>>> Linux-cluster at redhat.com
>>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster

-- 
esta es mi vida e me la vivo hasta que dios quiera

--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
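For reference, creating one of the additional filesystems in a setup like
this is only a couple of commands once CLVM and fencing are in place; the
volume group, cluster, and mount names below are placeholders:

    # clustered LV, then a GFS2 filesystem with one journal per node
    # (-t must be <clustername>:<fsname>; -j matches the 11 nodes)
    lvcreate -L 100G -n lv_share33 vg_shared
    mkfs.gfs2 -p lock_dlm -t mycluster:share33 -j 11 /dev/vg_shared/lv_share33

    # mount on each node (or let the cluster manager handle it)
    mount -t gfs2 /dev/vg_shared/lv_share33 /mnt/share33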
From andrew at beekhof.net  Sun Feb 22 20:15:42 2015
From: andrew at beekhof.net (Andrew Beekhof)
Date: Mon, 23 Feb 2015 07:15:42 +1100
Subject: [Linux-cluster] git repo of fence-virt
In-Reply-To: <20150209131424.GA12489@redhat.com>
References: <20150209.171912.399832125393796182.yamato@redhat.com>
	<20150209131424.GA12489@redhat.com>
Message-ID: <28403ACB-026C-404A-9CCA-8B9CE03A705C@beekhof.net>

Ryan, since we're trying to consolidate everything else in the clusterlabs
org area of github, perhaps consider moving it there.
You'll still have complete control over it.

> On 10 Feb 2015, at 12:14 am, Ryan McCabe wrote:
>
> On Mon, Feb 09, 2015 at 05:19:12PM +0900, Masatake YAMATO wrote:
>> Hi,
>>
>> Before sending a PR (https://github.com/ryan-mccabe/fence-virt/pull/2) to
>> you, I had to search for the git repository for fence-virt again and again.
>>
>> Although http://sourceforge.net/p/fence-virt/code/ci/master/tree/ is no
>> longer maintained, it is still advertised at
>> https://fedorahosted.org/cluster/wiki/FenceVirt.
>>
>> Could you update these old URLs to the newer one
>> (https://github.com/ryan-mccabe/fence-virt)? Then people who are
>> interested in fence-virt can work on the latest code.
>
> Hi,
>
> Thanks. I didn't realize it was pointing to the old repo. I'll get that
> corrected, and I'll see if Lon can take down the old sourceforge site.
>
> Thanks,
>
> Ryan
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster

From andrew at beekhof.net  Sun Feb 22 20:18:11 2015
From: andrew at beekhof.net (Andrew Beekhof)
Date: Mon, 23 Feb 2015 07:18:11 +1100
Subject: [Linux-cluster] fence-agents-4.0.15 stable release
In-Reply-To: <54D8C578.7030301@redhat.com>
References: <54D8C578.7030301@redhat.com>
Message-ID: 

Same as my message to Ryan :-)
In line with what we agreed at the summit, can we begin the process of
migrating fence-agents to https://github.com/ClusterLabs ?

You'll still have complete control, and there is of course no need for
github to be the only copy.

> On 10 Feb 2015, at 1:34 am, Marek marx Grac wrote:
>
> Welcome to the fence-agents 4.0.15 release
>
> This release includes several bugfixes:
>
> * Tripp Lite PDUs are now supported by fence_tripplite_snmp (symlink to
>   fence_apc_snmp)
> * Default values in metadata sometimes differed from those actually used;
>   this is fixed now
> * improvements in testing
>
> The new source tarball can be downloaded here:
>
> https://fedorahosted.org/releases/f/e/fence-agents/fence-agents-4.0.15.tar.xz
>
> To report bugs or issues:
>
> https://bugzilla.redhat.com/
>
> Would you like to meet the cluster team or members of its community?
>
> Join us on IRC (irc.freenode.net #linux-cluster) and share your
> experience with other sysadmins or power users.
>
> Thanks/congratulations to all people who contributed to achieving this
> great milestone.
>
> m,
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster

From mgrac at redhat.com  Mon Feb 23 09:25:42 2015
From: mgrac at redhat.com (Marek "marx" Grac)
Date: Mon, 23 Feb 2015 10:25:42 +0100
Subject: [Linux-cluster] fence-agents-4.0.15 stable release
In-Reply-To: 
References: <54D8C578.7030301@redhat.com>
Message-ID: <54EAF216.2030109@redhat.com>

On 02/22/2015 09:18 PM, Andrew Beekhof wrote:
> Same as my message to Ryan :-)
> In line with what we agreed at the summit, can we begin the process of
> migrating fence-agents to https://github.com/ClusterLabs ?
>
> You'll still have complete control, and there is of course no need for
> github to be the only copy.

OK, I will work on that for the next release.

m,

>
>> On 10 Feb 2015, at 1:34 am, Marek marx Grac wrote:
>>
>> Welcome to the fence-agents 4.0.15 release
>>
>> This release includes several bugfixes:
>>
>> * Tripp Lite PDUs are now supported by fence_tripplite_snmp (symlink to
>>   fence_apc_snmp)
>> * Default values in metadata sometimes differed from those actually used;
>>   this is fixed now
>> * improvements in testing
>>
>> The new source tarball can be downloaded here:
>>
>> https://fedorahosted.org/releases/f/e/fence-agents/fence-agents-4.0.15.tar.xz
>>
>> To report bugs or issues:
>>
>> https://bugzilla.redhat.com/
>>
>> Would you like to meet the cluster team or members of its community?
>>
>> Join us on IRC (irc.freenode.net #linux-cluster) and share your
>> experience with other sysadmins or power users.
>>
>> Thanks/congratulations to all people who contributed to achieving this
>> great milestone.
>>
>> m,
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster at redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>

From rmccabe at redhat.com  Mon Feb 23 14:29:35 2015
From: rmccabe at redhat.com (Ryan McCabe)
Date: Mon, 23 Feb 2015 09:29:35 -0500
Subject: [Linux-cluster] git repo of fence-virt
In-Reply-To: <28403ACB-026C-404A-9CCA-8B9CE03A705C@beekhof.net>
References: <20150209.171912.399832125393796182.yamato@redhat.com>
	<20150209131424.GA12489@redhat.com>
	<28403ACB-026C-404A-9CCA-8B9CE03A705C@beekhof.net>
Message-ID: <20150223142934.GA766919@redhat.com>

On Mon, Feb 23, 2015 at 07:15:42AM +1100, Andrew Beekhof wrote:
> Ryan, since we're trying to consolidate everything else in the clusterlabs
> org area of github, perhaps consider moving it there.
> You'll still have complete control over it.

Sure, works for me. Could you or somebody else who has access add me to
the clusterlabs org on github?

Ryan

From andrew at beekhof.net  Mon Feb 23 20:13:05 2015
From: andrew at beekhof.net (Andrew Beekhof)
Date: Tue, 24 Feb 2015 07:13:05 +1100
Subject: [Linux-cluster] [Cluster-devel] git repo of fence-virt
In-Reply-To: <20150223142934.GA766919@redhat.com>
References: <20150209.171912.399832125393796182.yamato@redhat.com>
	<20150209131424.GA12489@redhat.com>
	<28403ACB-026C-404A-9CCA-8B9CE03A705C@beekhof.net>
	<20150223142934.GA766919@redhat.com>
Message-ID: <41143705-C6AB-4A65-9EA2-DD84532D4D10@beekhof.net>

> On 24 Feb 2015, at 1:29 am, Ryan McCabe wrote:
>
> On Mon, Feb 23, 2015 at 07:15:42AM +1100, Andrew Beekhof wrote:
>> Ryan, since we're trying to consolidate everything else in the clusterlabs
>> org area of github, perhaps consider moving it there.
>> You'll still have complete control over it.
>
> Sure, works for me. Could you or somebody else who has access add me to
> the clusterlabs org on github?
done :)

From andrew at beekhof.net  Mon Feb 23 20:23:05 2015
From: andrew at beekhof.net (Andrew Beekhof)
Date: Tue, 24 Feb 2015 07:23:05 +1100
Subject: [Linux-cluster] fence-agents-4.0.15 stable release
In-Reply-To: <54EAF216.2030109@redhat.com>
References: <54D8C578.7030301@redhat.com> <54EAF216.2030109@redhat.com>
Message-ID: 

> On 23 Feb 2015, at 8:25 pm, Marek marx Grac wrote:
>
> On 02/22/2015 09:18 PM, Andrew Beekhof wrote:
>> Same as my message to Ryan :-)
>> In line with what we agreed at the summit, can we begin the process of
>> migrating fence-agents to https://github.com/ClusterLabs ?
>>
>> You'll still have complete control, and there is of course no need for
>> github to be the only copy.
>
> OK, I will work on that for the next release.

Ryan just moved fence-virt over. It seems all I need is your github
username (to add you to the clusterlabs org), and you can create/move the
new repo whenever it is convenient.

>
> m,
>
>>
>>> On 10 Feb 2015, at 1:34 am, Marek marx Grac wrote:
>>>
>>> Welcome to the fence-agents 4.0.15 release
>>>
>>> This release includes several bugfixes:
>>>
>>> * Tripp Lite PDUs are now supported by fence_tripplite_snmp (symlink to
>>>   fence_apc_snmp)
>>> * Default values in metadata sometimes differed from those actually used;
>>>   this is fixed now
>>> * improvements in testing
>>>
>>> The new source tarball can be downloaded here:
>>>
>>> https://fedorahosted.org/releases/f/e/fence-agents/fence-agents-4.0.15.tar.xz
>>>
>>> To report bugs or issues:
>>>
>>> https://bugzilla.redhat.com/
>>>
>>> Would you like to meet the cluster team or members of its community?
>>>
>>> Join us on IRC (irc.freenode.net #linux-cluster) and share your
>>> experience with other sysadmins or power users.
>>>
>>> Thanks/congratulations to all people who contributed to achieving this
>>> great milestone.
>>>
>>> m,
>>>
>>> --
>>> Linux-cluster mailing list
>>> Linux-cluster at redhat.com
>>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster