[Linux-cluster] centos5 to RHEL6 migration
rajatjpatel
rajatjpatel at gmail.com
Mon Jan 9 10:20:19 UTC 2012
1. Back up anything you care about.
2. Remember: a fresh install is generally *strongly* preferred over an
upgrade.
Regards,
Rajat Patel
http://studyhat.blogspot.com
FIRST THEY IGNORE YOU...
THEN THEY LAUGH AT YOU...
THEN THEY FIGHT YOU...
THEN YOU WIN...
On Mon, Jan 9, 2012 at 9:33 AM, <linux-cluster-request at redhat.com> wrote:
> Send Linux-cluster mailing list submissions to
> linux-cluster at redhat.com
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://www.redhat.com/mailman/listinfo/linux-cluster
> or, via email, send a message with subject or body 'help' to
> linux-cluster-request at redhat.com
>
> You can reach the person managing the list at
> linux-cluster-owner at redhat.com
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Linux-cluster digest..."
>
>
> Today's Topics:
>
> 1. centos5 to RHEL6 migration (Terry)
> 2. Re: centos5 to RHEL6 migration (Michael Allen)
> 3. Re: centos5 to RHEL6 migration (Digimer)
> 4. Re: centos5 to RHEL6 migration (Terry)
> 5. Re: centos5 to RHEL6 migration (Fajar A. Nugraha)
> 6. Re: GFS on CentOS - cman unable to start (Wes Modes)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sun, 8 Jan 2012 17:39:37 -0600
> From: Terry <td3201 at gmail.com>
> To: linux clustering <linux-cluster at redhat.com>
> Subject: [Linux-cluster] centos5 to RHEL6 migration
> Message-ID: <CAHSRzpBvTbGU22JqVC0NCm7ETe84egB7rEkp22wWjEyZYA_T-Q at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hello,
>
> I am trying to gently migrate a 3-node cluster from centos5 to RHEL6. I
> have already taken one of the three nodes out and rebuilt it. My thinking
> is to build a new cluster from the RHEL node but want to run it by everyone
> here first. The cluster consists of a handful of NFS volumes and a
> PostgreSQL database. I am not concerned about the database. I am moving
> to a new version and will simply migrate that. I am more concerned about
> all of the ext4 clustered LVM volumes. In this process, if I shut down the
> old cluster, what's the process to force the new node to read those volumes
> into the new single-node cluster? A pvscan on the new server shows all of
> the volumes fine. I am concerned there's something else I'll have to do
> here to begin mounting these volumes in the new cluster.
> [root at server ~]# pvdisplay
> Skipping clustered volume group vg_data01b
>
> Thanks!
>
> ------------------------------
>
> Message: 2
> Date: Sun, 8 Jan 2012 19:36:10 -0600
> From: Michael Allen <michael.allen at visi.com>
> To: linux clustering <linux-cluster at redhat.com>
> Subject: Re: [Linux-cluster] centos5 to RHEL6 migration
> Message-ID: <20120108193610.2404b425 at godelsrevenge.induswx.com>
> Content-Type: text/plain; charset=US-ASCII
>
> On Sun, 8 Jan 2012 17:39:37 -0600
> Terry <td3201 at gmail.com> wrote:
>
> > Hello,
> >
> > I am trying to gently migrate a 3-node cluster from centos5 to
> > RHEL6. I have already taken one of the three nodes out and rebuilt
> > it. My thinking is to build a new cluster from the RHEL node but want
> > to run it by everyone here first. The cluster consists of a handful
> > of NFS volumes and a PostgreSQL database. I am not concerned about
> > the database. I am moving to a new version and will simply migrate
> > that. I am more concerned about all of the ext4 clustered LVM
> > volumes. In this process, if I shut down the old cluster, what's the
> > process to force the new node to read those volumes in to the new
> > single-node cluster? A pvscan on the new server shows all of the
> > volumes fine. I am concerned there's something else I'll have to do
> > here to begin mounting these volumes in the new cluster.
> > [root at server ~]# pvdisplay
> > Skipping clustered volume group vg_data01b
> >
> > Thanks!
> This message comes at a good time for me, too, since I am considering
> the same thing. I have 10 nodes but it appears that a change to CentOS
> 6.xx is about due.
>
> Michael Allen
>
>
>
> ------------------------------
>
> Message: 3
> Date: Sun, 08 Jan 2012 21:38:34 -0500
> From: Digimer <linux at alteeve.com>
> To: linux clustering <linux-cluster at redhat.com>
> Subject: Re: [Linux-cluster] centos5 to RHEL6 migration
> Message-ID: <4F0A532A.2000202 at alteeve.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> On 01/08/2012 06:39 PM, Terry wrote:
> > Hello,
> >
> > I am trying to gently migrate a 3-node cluster from centos5 to RHEL6. I
> > have already taken one of the three nodes out and rebuilt it. My
> > thinking is to build a new cluster from the RHEL node but want to run it
> > by everyone here first. The cluster consists of a handful of NFS volumes
> > and a PostgreSQL database. I am not concerned about the database. I am
> > moving to a new version and will simply migrate that. I am more
> > concerned about all of the ext4 clustered LVM volumes. In this process,
> > if I shut down the old cluster, what's the process to force the new node
> > to read those volumes in to the new single-node cluster? A pvscan on
> > the new server shows all of the volumes fine. I am concerned there's
> > something else I'll have to do here to begin mounting these volumes in
> > the new cluster.
> > [root at server ~]# pvdisplay
> > Skipping clustered volume group vg_data01b
> >
> > Thanks!
>
> Technically yes, practically no. Or rather, not without a lot of
> testing first.
>
> I've never done this, but here are some pointers;
>
> <cman upgrading="yes" disallowed="1" ...>
>
> upgrading
> Set this if you are performing a rolling upgrade of the cluster
> between major releases.
>
> disallowed
> Set this to 1 to enable cman's Disallowed mode. This is usually
> only needed for backwards compatibility.
>
> <group groupd_compat="1" />
>
> Enable compatibility with cluster2 nodes. groupd(8)
>
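Putting those pointers together, a minimal cluster.conf sketch might look like this (hypothetical: attribute placement is inferred from the descriptions above, not from a tested config; node and fence sections are elided):

```xml
<?xml version="1.0"?>
<!-- Hypothetical sketch: compatibility knobs for a rolling
     cluster2 -> cluster3 migration, per the notes above. -->
<cluster name="example_cluster" config_version="1">
    <cman upgrading="yes" disallowed="1"/>
    <group groupd_compat="1"/>
    <!-- existing clusternodes, fencedevices, rm sections go here -->
</cluster>
```

The idea would be to remove these attributes again once every node runs the new release.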
> There may be some other things you need to do as well. Please be sure
> to do proper testing and, if you have the budget, hire Red Hat to advise
> on this process. Also, please report back your results. It would help me
> help others in the same boat later. :)
>
> --
> Digimer
> E-Mail: digimer at alteeve.com
> Freenode handle: digimer
> Papers and Projects: http://alteeve.com
> Node Assassin: http://nodeassassin.org
> "omg my singularity battery is dead again.
> stupid hawking radiation." - epitron
>
>
>
> ------------------------------
>
> Message: 4
> Date: Sun, 8 Jan 2012 21:31:38 -0600
> From: Terry <td3201 at gmail.com>
> To: Digimer <linux at alteeve.com>
> Cc: linux clustering <linux-cluster at redhat.com>
> Subject: Re: [Linux-cluster] centos5 to RHEL6 migration
> Message-ID: <CAHSRzpCHH65E4Qi2mcuCE=6RpVyZnHQGjYT7hrg7M6Bs=Wyv7Q at mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> If it's not practical, am I left with building a new cluster from scratch?
>
> On Sun, Jan 8, 2012 at 8:38 PM, Digimer <linux at alteeve.com> wrote:
>
> > On 01/08/2012 06:39 PM, Terry wrote:
> > > Hello,
> > >
> > > I am trying to gently migrate a 3-node cluster from centos5 to RHEL6. I
> > > have already taken one of the three nodes out and rebuilt it. My
> > > thinking is to build a new cluster from the RHEL node but want to run it
> > > by everyone here first. The cluster consists of a handful of NFS volumes
> > > and a PostgreSQL database. I am not concerned about the database. I am
> > > moving to a new version and will simply migrate that. I am more
> > > concerned about all of the ext4 clustered LVM volumes. In this process,
> > > if I shut down the old cluster, what's the process to force the new node
> > > to read those volumes in to the new single-node cluster? A pvscan on
> > > the new server shows all of the volumes fine. I am concerned there's
> > > something else I'll have to do here to begin mounting these volumes in
> > > the new cluster.
> > > [root at server ~]# pvdisplay
> > > Skipping clustered volume group vg_data01b
> > >
> > > Thanks!
> >
> > Technically yes, practically no. Or rather, not without a lot of
> > testing first.
> >
> > I've never done this, but here are some pointers;
> >
> > <cman upgrading="yes" disallowed="1" ...>
> >
> > upgrading
> > Set this if you are performing a rolling upgrade of the cluster
> > between major releases.
> >
> > disallowed
> > Set this to 1 to enable cman's Disallowed mode. This is usually
> > only needed for backwards compatibility.
> >
> > <group groupd_compat="1" />
> >
> > Enable compatibility with cluster2 nodes. groupd(8)
> >
> > There may be some other things you need to do as well. Please be sure
> > to do proper testing and, if you have the budget, hire Red Hat to advise
> > on this process. Also, please report back your results. It would help me
> > help others in the same boat later. :)
> >
> > --
> > Digimer
> > E-Mail: digimer at alteeve.com
> > Freenode handle: digimer
> > Papers and Projects: http://alteeve.com
> > Node Assassin: http://nodeassassin.org
> > "omg my singularity battery is dead again.
> > stupid hawking radiation." - epitron
> >
>
> ------------------------------
>
> Message: 5
> Date: Mon, 9 Jan 2012 11:01:06 +0700
> From: "Fajar A. Nugraha" <list at fajar.net>
> To: linux clustering <linux-cluster at redhat.com>
> Subject: Re: [Linux-cluster] centos5 to RHEL6 migration
> Message-ID: <CAG1y0sdjqaV4nCHfOHy0b9moQYe+BdzkMBYDRT7UuRVAqkSqag at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> On Mon, Jan 9, 2012 at 10:31 AM, Terry <td3201 at gmail.com> wrote:
> > If it's not practical, am I left with building a new cluster from
> > scratch?
>
> I'm pretty sure that if your ONLY problem is "Skipping clustered volume
> group vg_data01b", you can just turn off the cluster flag with "vgchange
> -cn", then use "-o lock_nolock" to mount it on a SINGLE (i.e. non-cluster)
> node. That was your original question, wasn't it?
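Sketched as commands (untested; the VG name comes from the error message in the thread, while the LV name "lvname" and the mount point are hypothetical; run this only once the old cluster is fully shut down):

```shell
# Clear the clustered flag so a standalone node can use the VG:
vgchange -cn vg_data01b
# Activate it locally and mount the ext4 LV as usual:
vgchange -ay vg_data01b
mount /dev/vg_data01b/lvname /mnt/data
# (For a GFS/GFS2 filesystem you would mount with -o lock_nolock instead;
# plain ext4 needs no special lock option.)
```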
>
> As for upgrading, I haven't tested it. You should be able to use your
> old storage, but just create other settings from scratch. Like Digimer
> said, be sure to do proper testing :)
>
> --
> Fajar
>
>
>
> ------------------------------
>
> Message: 6
> Date: Sun, 08 Jan 2012 20:03:18 -0800
> From: Wes Modes <wmodes at ucsc.edu>
> To: linux-cluster at redhat.com
> Subject: Re: [Linux-cluster] GFS on CentOS - cman unable to start
> Message-ID: <4F0A6706.6090308 at ucsc.edu>
> Content-Type: text/plain; charset="iso-8859-1"
>
> The way cman resolves cluster node names is less than clear, as per
> the RHEL bugzilla report.
>
> The hostname and cluster.conf match, as does /etc/hosts and uname -n.
> The short names and FQDN ping. I believe all the nodes' cluster.conf files
> are in sync, and all nodes are accessible to each other using either short
> or long names.
>
> You'll have to trust that I've tried everything obvious, and every
> possible combination of FQDN and short names in cluster.conf and
> hostname. That said, it is totally possible I missed something obvious.
>
> I suspect there is something else going on and I don't know how to get
> at it.
>
> Wes
>
>
> On 1/6/2012 6:06 PM, Kevin Stanton wrote:
> >
> > > Hi,
> >
> > > I think CMAN expects the names of the cluster nodes to be the same as
> > > those returned by the command "uname -n".
> >
> > > From what you write, your nodes' hostnames are test01.gdao.ucsc.edu
> > > and test02.gdao.ucsc.edu, but in cluster.conf you have declared only
> > > "test01" and "test02".
> >
> >
> >
> > I haven't found this to be the case in the past. I actually use a
> > separate short name to reference each node which is different than the
> > hostname of the server itself. All I've ever had to do is make sure
> > it resolves correctly. You can do this either in DNS and/or in
> > /etc/hosts. I have found that it's a good idea to do both in case
> > your DNS server is a virtual machine and is not running for some
> > reason. In that case with /etc/hosts you can still start cman.
> >
> >
> >
> > I would make sure whatever node names you use in the cluster.conf will
> > resolve when you try to ping it from all nodes in the cluster. Also
> > make sure your cluster.conf is in sync between all nodes.
> >
> >
> >
> > -Kevin
> >
> >
> >
> >
> >
> > ------------------------------------------------------------------------
> >
> > These servers are currently on the same host, but may not be in
> > the future. They are in a vm cluster (though honestly, I'm not
> > sure what this means yet).
> >
> > SELinux is enabled, but permissive.
> > Firewalling through iptables is turned off via
> > system-config-securitylevel
> >
> > There is no line currently in the cluster.conf that deals with
> > multicasting.
> >
> > Any other suggestions?
> >
> > Wes
> >
> > On 1/6/2012 12:05 PM, Luiz Gustavo Tonello wrote:
> >
> > Hi,
> >
> >
> >
> > Are these servers on VMware? On the same host?
> >
> > Is SELinux disabled? Does iptables have any rules?
> >
> >
> >
> > In my environment I had a problem starting GFS2 with servers on
> > different hosts.
> >
> > To cluster the servers, I needed to migrate one server to the same
> > host as the other and restart it.
> >
> >
> >
> > I think one of the problems was the virtual switches.
> >
> > To solve it, I changed the multicast IP to 225.0.0.13 in cluster.conf:
> >
> > <multicast addr="225.0.0.13"/>
> >
> > And added a static route on both nodes to use the default gateway.
> >
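The static-route part of that workaround might be sketched like this (untested; the interface name eth0 is an assumption, not from the thread):

```shell
# Hypothetical: force cluster multicast traffic (e.g. 225.0.0.13)
# out the cluster-facing NIC instead of whatever interface the
# kernel's routing table would otherwise pick.
ip route add 224.0.0.0/4 dev eth0
```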
> >
> >
> > I don't know if it's the correct fix, but it solved my problem.
> >
> >
> >
> > I hope that helps you.
> >
> >
> >
> > Regards.
> >
> >
> >
> > On Fri, Jan 6, 2012 at 5:01 PM, Wes Modes <wmodes at ucsc.edu> wrote:
> >
> > Hi, Steven.
> >
> > I've tried just about every possible combination of hostname and
> > cluster.conf.
> >
> > ping to test01 resolves to 128.114.31.112
> > ping to test01.gdao.ucsc.edu resolves to 128.114.31.112
> >
> > It feels like the right thing is being returned. This feels like it
> > might be a quirk (or bug possibly) of cman or openais.
> >
> > There are some old bug reports around this, for example
> > https://bugzilla.redhat.com/show_bug.cgi?id=488565. It sounds like the
> > way that cman reports this error is anything but straightforward.
> >
> > Is there anyone who has encountered this error and found a solution?
> >
> > Wes
> >
> >
> >
> > On 1/6/2012 2:00 AM, Steven Whitehouse wrote:
> > > Hi,
> > >
> > > On Thu, 2012-01-05 at 13:54 -0800, Wes Modes wrote:
> > >> Howdy, y'all. I'm trying to set up GFS in a cluster on CentOS systems
> > >> running on vmWare. The GFS FS is on a Dell Equilogic SAN.
> > >>
> > >> I keep running into the same problem despite many differently-flavored
> > >> attempts to set up GFS. The problem comes when I try to start cman, the
> > >> cluster management software.
> > >>
> > >> [root at test01]# service cman start
> > >> Starting cluster:
> > >> Loading modules... done
> > >> Mounting configfs... done
> > >> Starting ccsd... done
> > >> Starting cman... failed
> > >> cman not started: Can't find local node name in cluster.conf
> > >> /usr/sbin/cman_tool: aisexec daemon didn't start
> > >>
> > [FAILED]
> > >>
> > > This looks like what it says... whatever the node name is in
> > > cluster.conf, it doesn't exist when the name is looked up, or possibly
> > > it does exist, but is mapped to the loopback address (it needs to map to
> > > an address which is valid cluster-wide)
> > >
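That resolver check can be scripted; here is a small sketch (node names are examples from the thread, and `getent` is assumed to be available, as on typical Linux systems):

```shell
# Report whether a node name resolves to a usable, non-loopback address.
# cman needs the name to map to an address reachable cluster-wide.
check_node_name() {
    addr=$(getent ahosts "$1" | awk '{print $1; exit}')
    case "$addr" in
        "")        echo "unresolved" ;;
        127.*|::1) echo "loopback"   ;;  # bad: maps to the local host only
        *)         echo "ok: $addr"  ;;
    esac
}

check_node_name localhost    # prints "loopback"
```

Running it for every name used in cluster.conf, from every node, catches both failure modes described here.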
> > > Since your config files look correct, the next thing to check is what
> > > the resolver is actually returning. Try (for example) a ping to test01
> > > (you need to specify exactly the same form of the name as is used in
> > > cluster.conf) from test02 and see whether it uses the correct ip
> > > address, just in case the wrong thing is being returned.
> > >
> > > Steve.
> > >
> > >> [root at test01]# tail /var/log/messages
> > >> Jan 5 13:39:40 testbench06 ccsd[13194]: Unable to connect to
> > >> cluster infrastructure after 1193640 seconds.
> > >> Jan 5 13:40:10 testbench06 ccsd[13194]: Unable to connect to
> > >> cluster infrastructure after 1193670 seconds.
> > >> Jan 5 13:40:24 testbench06 openais[3939]: [MAIN ] AIS Executive
> > >> Service RELEASE 'subrev 1887 version 0.80.6'
> > >> Jan 5 13:40:24 testbench06 openais[3939]: [MAIN ] Copyright (C)
> > >> 2002-2006 MontaVista Software, Inc and contributors.
> > >> Jan 5 13:40:24 testbench06 openais[3939]: [MAIN ] Copyright (C)
> > >> 2006 Red Hat, Inc.
> > >> Jan 5 13:40:24 testbench06 openais[3939]: [MAIN ] AIS Executive
> > >> Service: started and ready to provide service.
> > >> Jan 5 13:40:24 testbench06 openais[3939]: [MAIN ] local node name
> > >> "test01.gdao.ucsc.edu" not found in cluster.conf
> > >> Jan 5 13:40:24 testbench06 openais[3939]: [MAIN ] Error reading CCS
> > >> info, cannot start
> > >> Jan 5 13:40:24 testbench06 openais[3939]: [MAIN ] Error reading
> > >> config from CCS
> > >> Jan 5 13:40:24 testbench06 openais[3939]: [MAIN ] AIS Executive
> > >> exiting (reason: could not read the main configuration file).
> > >>
> > >> Here are details of my configuration:
> > >>
> > >> [root at test01]# rpm -qa | grep cman
> > >> cman-2.0.115-85.el5_7.2
> > >>
> > >> [root at test01]# echo $HOSTNAME
> > >> test01.gdao.ucsc.edu
> > >>
> > >> [root at test01]# hostname
> > >> test01.gdao.ucsc.edu
> > >>
> > >> [root at test01]# cat /etc/hosts
> > >> # Do not remove the following line, or various programs
> > >> # that require network functionality will fail.
> > >> 128.114.31.112 test01 test01.gdao test01.gdao.ucsc.edu
> > >> 128.114.31.113 test02 test02.gdao test02.gdao.ucsc.edu
> > >> 127.0.0.1 localhost.localdomain localhost
> > >> ::1 localhost6.localdomain6 localhost6
> > >>
> > >> [root at test01]# sestatus
> > >> SELinux status: enabled
> > >> SELinuxfs mount: /selinux
> > >> Current mode: permissive
> > >> Mode from config file: permissive
> > >> Policy version: 21
> > >> Policy from config file: targeted
> > >>
> > >> [root at test01]# cat /etc/cluster/cluster.conf
> > >> <?xml version="1.0"?>
> > >> <cluster config_version="25" name="gdao_cluster">
> > >> <fence_daemon post_fail_delay="0" post_join_delay="120"/>
> > >> <clusternodes>
> > >> <clusternode name="test01" nodeid="1" votes="1">
> > >> <fence>
> > >> <method name="single">
> > >> <device name="gfs_vmware"/>
> > >> </method>
> > >> </fence>
> > >> </clusternode>
> > >> <clusternode name="test02" nodeid="2" votes="1">
> > >> <fence>
> > >> <method name="single">
> > >> <device name="gfs_vmware"/>
> > >> </method>
> > >> </fence>
> > >> </clusternode>
> > >> </clusternodes>
> > >> <cman/>
> > >> <fencedevices>
> > >> <fencedevice agent="fence_manual" name="gfs1_ipmi"/>
> > >> <fencedevice agent="fence_vmware" name="gfs_vmware"
> > >> ipaddr="gdvcenter.ucsc.edu" login="root" passwd="1hateAmazon.com"
> > >> vmlogin="root" vmpasswd="esxpass"
> > >> port="/vmfs/volumes/49086551-c64fd83c-0401-001e0bcd6848/eagle1/gfs1.vmx"/>
> > >> </fencedevices>
> > >> <rm>
> > >> <failoverdomains/>
> > >> </rm>
> > >> </cluster>
> > >>
> > >> I've seen much discussion of this problem, but no definitive
> > solutions.
> > >> Any help you can provide will be welcome.
> > >>
> > >> Wes Modes
> > >>
> > >> --
> > >> Linux-cluster mailing list
> > >> Linux-cluster at redhat.com
> > >> https://www.redhat.com/mailman/listinfo/linux-cluster
> > >
> >
> >
> >
> >
> >
> >
> > --
> > Luiz Gustavo P Tonello.
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
>
> ------------------------------
>
>
> End of Linux-cluster Digest, Vol 93, Issue 7
> ********************************************
>