[Linux-cluster] Re: Linux-cluster Digest, Vol 57, Issue 5

Jacques Duplessis duplessis.jacques at gmail.com
Tue Jan 6 23:56:56 UTC 2009


# Add these lines to the syslog.conf file & restart syslog
# ========================================================
# vi /etc/syslog.conf

# rgmanager log
  local4.*                     /var/log/rgmanager

# Create the log file before restarting syslog
# ========================================================
# touch /var/log/rgmanager
# chmod 644 /var/log/rgmanager
# chown root.root /var/log/rgmanager

# service syslog restart
Shutting down kernel logger: [  OK  ]
Shutting down system logger: [  OK  ]
Starting system logger: [  OK  ]
Starting kernel logger: [  OK  ]
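
# Optional quick check that the local4 facility is wired up -- a test
# message sent with logger(1) should land in the new file
# ========================================================
# logger -p local4.info "rgmanager log test"
# tail -1 /var/log/rgmanager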

# Change cluster config file to log rgmanager info
# ========================================================

# vi /etc/cluster/cluster.conf

Change the line
<rm>
to
<rm log_facility="local4" log_level="7">
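
Note: for ccs_tool to accept and propagate the edit in the next step,
config_version at the top of cluster.conf must also be raised above the
running version, e.g. (the new number here is just an example):

<cluster alias="ipmicluster" config_version="9" name="ipmicluster">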



# Push changes to all cluster nodes
# ========================================================

# ccs_tool update /etc/cluster/cluster.conf
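
# Each node should then report the new config version (a quick check,
# assuming cman is running on the node):
# cman_tool version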

Unplug the network cable on the node, plug it back in, and then look
at the /var/log/rgmanager file.
It may contain useful info for us.





On Tue, Jan 6, 2009 at 12:00 PM, <linux-cluster-request at redhat.com> wrote:

> Send Linux-cluster mailing list submissions to
>        linux-cluster at redhat.com
>
> To subscribe or unsubscribe via the World Wide Web, visit
>        https://www.redhat.com/mailman/listinfo/linux-cluster
> or, via email, send a message with subject or body 'help' to
>        linux-cluster-request at redhat.com
>
> You can reach the person managing the list at
>        linux-cluster-owner at redhat.com
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Linux-cluster digest..."
>
>
> Today's Topics:
>
>   1. Re: Re: Fencing test (Paras pradhan)
>   2. problem adding new node to an existing cluster
>      (Greenseid, Joseph M.)
>   3. Re: problem adding new node to an existing cluster (Bob Peterson)
>   4. RE: problem adding new node to an existing cluster
>      (Greenseid, Joseph M.)
>   5. RE: problem adding new node to an existing cluster
>      (Greenseid, Joseph M.)
>   6. RE: problem adding new node to an existing cluster
>      (Greenseid, Joseph M.)
>   7. Re: problem adding new node to an existing cluster (Bob Peterson)
>   8. RE: problem adding new node to an existing cluster
>      (Greenseid, Joseph M.)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 5 Jan 2009 12:11:24 -0600
> From: "Paras pradhan" <pradhanparas at gmail.com>
> Subject: Re: [Linux-cluster] Re: Fencing test
> To: "linux clustering" <linux-cluster at redhat.com>
> Message-ID:
>        <8b711df40901051011x79066243g38108439ffb1075f at mail.gmail.com>
> Content-Type: text/plain; charset=ISO-8859-1
>
> hi,
>
> On Mon, Jan 5, 2009 at 8:23 AM, Rajagopal Swaminathan
> <raju.rajsand at gmail.com> wrote:
> > Greetings,
> >
> > On Sat, Jan 3, 2009 at 4:18 AM, Paras pradhan <pradhanparas at gmail.com>
> wrote:
> >>
> >> Here I am using 4 nodes.
> >>
> >> Node 1) That runs luci
> >> Node 2) This is my iscsi shared storage where my virtual machine(s)
> resides
> >> Node 3) First node in my two node cluster
> >> Node 4) Second node in my two node cluster
> >>
> >> All of them are connected simply to an unmanaged 16 port switch.
> >
> > Luci does not require a separate node to run.  It can run on one of the
> > member nodes (node 3 or 4).
>
> OK.
>
> >
> > what does clustat say?
>
> Here is my clustat output:
>
> -----------
>
> [root at ha1lx ~]# clustat
> Cluster Status for ipmicluster @ Mon Jan  5 12:00:10 2009
> Member Status: Quorate
>
>  Member Name                    ID   Status
>  ------ ----                    ---- ------
>  10.42.21.29                    1    Online, rgmanager
>  10.42.21.27                    2    Online, Local, rgmanager
>
>  Service Name     Owner (Last)     State
>  ------- ----     ----- ------     -----
>  vm:linux64       10.42.21.27      started
> [root at ha1lx ~]#
> ------------------------
>
>
> 10.42.21.27 is node 3 and 10.42.21.29 is node 4
>
>
>
> >
> > Can you post your cluster.conf here?
>
> Here is my cluster.conf
>
> --
> [root at ha1lx cluster]# more cluster.conf
> <?xml version="1.0"?>
> <cluster alias="ipmicluster" config_version="8" name="ipmicluster">
>         <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
>         <clusternodes>
>                 <clusternode name="10.42.21.29" nodeid="1" votes="1">
>                         <fence>
>                                 <method name="1">
>                                         <device name="fence2"/>
>                                 </method>
>                         </fence>
>                 </clusternode>
>                 <clusternode name="10.42.21.27" nodeid="2" votes="1">
>                         <fence>
>                                 <method name="1">
>                                         <device name="fence1"/>
>                                 </method>
>                         </fence>
>                 </clusternode>
>         </clusternodes>
>         <cman expected_votes="1" two_node="1"/>
>         <fencedevices>
>                 <fencedevice agent="fence_ipmilan" ipaddr="10.42.21.28" login="admin" name="fence1" passwd="admin"/>
>                 <fencedevice agent="fence_ipmilan" ipaddr="10.42.21.30" login="admin" name="fence2" passwd="admin"/>
>         </fencedevices>
>         <rm>
>                 <failoverdomains>
>                         <failoverdomain name="myfd" nofailback="0" ordered="1" restricted="0">
>                                 <failoverdomainnode name="10.42.21.29" priority="2"/>
>                                 <failoverdomainnode name="10.42.21.27" priority="1"/>
>                         </failoverdomain>
>                 </failoverdomains>
>                 <resources/>
>                 <vm autostart="1" domain="myfd" exclusive="0" migrate="live" name="linux64" path="/guest_roots" recovery="restart"/>
>         </rm>
> </cluster>
> ------
>
>
> Here:
>
> 10.42.21.28 is the IPMI interface on node 3
> 10.42.21.30 is the IPMI interface on node 4
>
>
>
>
>
>
>
>
> >
> > When you pull out the network cable *and* plug it back in on, say, node 3,
> > what messages appear in /var/log/messages on node 4 (if any)?
> > (sorry for the repetition, but messages are necessary here to make any
> > sense of the situation)
> >
>
> OK, here is the log on node 4 after I disconnect the network cable on
> node 3.
>
> -----------
>
> Jan  5 12:05:24 ha2lx openais[4988]: [TOTEM] The token was lost in the OPERATIONAL state.
> Jan  5 12:05:24 ha2lx openais[4988]: [TOTEM] Receive multicast socket recv buffer size (288000 bytes).
> Jan  5 12:05:24 ha2lx openais[4988]: [TOTEM] Transmit multicast socket send buffer size (262142 bytes).
> Jan  5 12:05:24 ha2lx openais[4988]: [TOTEM] entering GATHER state from 2.
> Jan  5 12:05:28 ha2lx openais[4988]: [TOTEM] entering GATHER state from 0.
> Jan  5 12:05:28 ha2lx openais[4988]: [TOTEM] Creating commit token because I am the rep.
> Jan  5 12:05:28 ha2lx openais[4988]: [TOTEM] Saving state aru 76 high seq received 76
> Jan  5 12:05:28 ha2lx openais[4988]: [TOTEM] Storing new sequence id for ring ac
> Jan  5 12:05:28 ha2lx openais[4988]: [TOTEM] entering COMMIT state.
> Jan  5 12:05:28 ha2lx openais[4988]: [TOTEM] entering RECOVERY state.
> Jan  5 12:05:28 ha2lx openais[4988]: [TOTEM] position [0] member 10.42.21.29:
> Jan  5 12:05:28 ha2lx openais[4988]: [TOTEM] previous ring seq 168 rep 10.42.21.27
> Jan  5 12:05:28 ha2lx openais[4988]: [TOTEM] aru 76 high delivered 76 received flag 1
> Jan  5 12:05:28 ha2lx openais[4988]: [TOTEM] Did not need to originate any messages in recovery.
> Jan  5 12:05:28 ha2lx openais[4988]: [TOTEM] Sending initial ORF token
> Jan  5 12:05:28 ha2lx openais[4988]: [CLM  ] CLM CONFIGURATION CHANGE
> Jan  5 12:05:28 ha2lx openais[4988]: [CLM  ] New Configuration:
> Jan  5 12:05:28 ha2lx openais[4988]: [CLM  ]    r(0) ip(10.42.21.29)
> Jan  5 12:05:28 ha2lx openais[4988]: [CLM  ] Members Left:
> Jan  5 12:05:28 ha2lx openais[4988]: [CLM  ]    r(0) ip(10.42.21.27)
> Jan  5 12:05:28 ha2lx openais[4988]: [CLM  ] Members Joined:
> Jan  5 12:05:28 ha2lx openais[4988]: [CLM  ] CLM CONFIGURATION CHANGE
> Jan  5 12:05:28 ha2lx kernel: dlm: closing connection to node 2
> Jan  5 12:05:28 ha2lx openais[4988]: [CLM  ] New Configuration:
> Jan  5 12:05:28 ha2lx fenced[5004]: 10.42.21.27 not a cluster member after 0 sec post_fail_delay
> Jan  5 12:05:28 ha2lx openais[4988]: [CLM  ]    r(0) ip(10.42.21.29)
> Jan  5 12:05:28 ha2lx kernel: GFS2: fsid=ipmicluster:guest_roots.0: jid=1: Trying to acquire journal lock...
> Jan  5 12:05:28 ha2lx openais[4988]: [CLM  ] Members Left:
> Jan  5 12:05:28 ha2lx openais[4988]: [CLM  ] Members Joined:
> Jan  5 12:05:28 ha2lx openais[4988]: [SYNC ] This node is within the primary component and will provide service.
> Jan  5 12:05:28 ha2lx openais[4988]: [TOTEM] entering OPERATIONAL state.
> Jan  5 12:05:28 ha2lx openais[4988]: [CLM  ] got nodejoin message 10.42.21.29
> Jan  5 12:05:28 ha2lx openais[4988]: [CPG  ] got joinlist message from node 1
> Jan  5 12:05:28 ha2lx kernel: GFS2: fsid=ipmicluster:guest_roots.0: jid=1: Looking at journal...
> Jan  5 12:05:29 ha2lx kernel: GFS2: fsid=ipmicluster:guest_roots.0: jid=1: Acquiring the transaction lock...
> Jan  5 12:05:29 ha2lx kernel: GFS2: fsid=ipmicluster:guest_roots.0: jid=1: Replaying journal...
> Jan  5 12:05:29 ha2lx kernel: GFS2: fsid=ipmicluster:guest_roots.0: jid=1: Replayed 0 of 0 blocks
> Jan  5 12:05:29 ha2lx kernel: GFS2: fsid=ipmicluster:guest_roots.0: jid=1: Found 0 revoke tags
> Jan  5 12:05:29 ha2lx kernel: GFS2: fsid=ipmicluster:guest_roots.0: jid=1: Journal replayed in 1s
> Jan  5 12:05:29 ha2lx kernel: GFS2: fsid=ipmicluster:guest_roots.0: jid=1: Done
> ------------------
>
> Now when I plug the cable back into node 3, node 4 reboots; here is the
> quickly grabbed log from node 4:
>
>
> --
> Jan  5 12:07:12 ha2lx openais[4988]: [TOTEM] entering GATHER state from 11.
> Jan  5 12:07:12 ha2lx openais[4988]: [TOTEM] Saving state aru 1d high seq received 1d
> Jan  5 12:07:12 ha2lx openais[4988]: [TOTEM] Storing new sequence id for ring b0
> Jan  5 12:07:12 ha2lx openais[4988]: [TOTEM] entering COMMIT state.
> Jan  5 12:07:12 ha2lx openais[4988]: [TOTEM] entering RECOVERY state.
> Jan  5 12:07:12 ha2lx openais[4988]: [TOTEM] position [0] member 10.42.21.27:
> Jan  5 12:07:12 ha2lx openais[4988]: [TOTEM] previous ring seq 172 rep 10.42.21.27
> Jan  5 12:07:12 ha2lx openais[4988]: [TOTEM] aru 16 high delivered 16 received flag 1
> Jan  5 12:07:12 ha2lx openais[4988]: [TOTEM] position [1] member 10.42.21.29:
> Jan  5 12:07:12 ha2lx openais[4988]: [TOTEM] previous ring seq 172 rep 10.42.21.29
> Jan  5 12:07:12 ha2lx openais[4988]: [TOTEM] aru 1d high delivered 1d received flag 1
> Jan  5 12:07:12 ha2lx openais[4988]: [TOTEM] Did not need to originate any messages in recovery.
> Jan  5 12:07:12 ha2lx openais[4988]: [CLM  ] CLM CONFIGURATION CHANGE
> Jan  5 12:07:12 ha2lx openais[4988]: [CLM  ] New Configuration:
> Jan  5 12:07:12 ha2lx openais[4988]: [CLM  ]    r(0) ip(10.42.21.29)
> Jan  5 12:07:12 ha2lx openais[4988]: [CLM  ] Members Left:
> Jan  5 12:07:12 ha2lx openais[4988]: [CLM  ] Members Joined:
> Jan  5 12:07:12 ha2lx openais[4988]: [CLM  ] CLM CONFIGURATION CHANGE
> Jan  5 12:07:12 ha2lx openais[4988]: [CLM  ] New Configuration:
> Jan  5 12:07:12 ha2lx openais[4988]: [CLM  ]    r(0) ip(10.42.21.27)
> Jan  5 12:07:12 ha2lx openais[4988]: [CLM  ]    r(0) ip(10.42.21.29)
> Jan  5 12:07:12 ha2lx openais[4988]: [CLM  ] Members Left:
> Jan  5 12:07:12 ha2lx openais[4988]: [CLM  ] Members Joined:
> Jan  5 12:07:12 ha2lx openais[4988]: [CLM  ]    r(0) ip(10.42.21.27)
> Jan  5 12:07:12 ha2lx openais[4988]: [SYNC ] This node is within the primary component and will provide service.
> Jan  5 12:07:12 ha2lx openais[4988]: [TOTEM] entering OPERATIONAL state.
> Jan  5 12:07:12 ha2lx openais[4988]: [MAIN ] Killing node 10.42.21.27 because it has rejoined the cluster with existing state
> Jan  5 12:07:12 ha2lx openais[4988]: [CMAN ] cman killed by node 2 because we rejoined the cluster without a full restart
> Jan  5 12:07:12 ha2lx gfs_controld[5016]: groupd_dispatch error -1 errno 11
> Jan  5 12:07:12 ha2lx gfs_controld[5016]: groupd connection died
> Jan  5 12:07:12 ha2lx gfs_controld[5016]: cluster is down, exiting
> Jan  5 12:07:12 ha2lx dlm_controld[5010]: cluster is down, exiting
> Jan  5 12:07:12 ha2lx kernel: dlm: closing connection to node 1
> Jan  5 12:07:12 ha2lx fenced[5004]: cluster is down, exiting
> -------
>
>
> Also, here is the log from node 3:
>
> --
> [root at ha1lx ~]# tail -f /var/log/messages
> Jan  5 12:07:24 ha1lx openais[26029]: [TOTEM] entering OPERATIONAL state.
> Jan  5 12:07:24 ha1lx openais[26029]: [CLM  ] got nodejoin message 10.42.21.27
> Jan  5 12:07:24 ha1lx openais[26029]: [CLM  ] got nodejoin message 10.42.21.27
> Jan  5 12:07:24 ha1lx openais[26029]: [CPG  ] got joinlist message from node 2
> Jan  5 12:07:27 ha1lx ccsd[26019]: Attempt to close an unopened CCS descriptor (4520670).
> Jan  5 12:07:27 ha1lx ccsd[26019]: Error while processing disconnect: Invalid request descriptor
> Jan  5 12:07:27 ha1lx fenced[26045]: fence "10.42.21.29" success
> Jan  5 12:07:27 ha1lx kernel: GFS2: fsid=ipmicluster:guest_roots.1: jid=0: Trying to acquire journal lock...
> Jan  5 12:07:27 ha1lx kernel: GFS2: fsid=ipmicluster:guest_roots.1: jid=0: Looking at journal...
> Jan  5 12:07:28 ha1lx kernel: GFS2: fsid=ipmicluster:guest_roots.1: jid=0: Done
> ----------------
>
>
>
>
>
>
>
>
>
>
>
>
> > HTH
> >
> > With warm regards
> >
> > Rajagopal
> >
> > --
> > Linux-cluster mailing list
> > Linux-cluster at redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-cluster
> >
>
>
> Thanks a lot
>
> Paras.
>
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 5 Jan 2009 14:18:10 -0600
> From: "Greenseid, Joseph M." <Joseph.Greenseid at ngc.com>
> Subject: [Linux-cluster] problem adding new node to an existing
>        cluster
> To: <linux-cluster at redhat.com>
> Message-ID:
>        <D089B7B0C0FBCD498494B5A0AA74827DDB386E at XMBIL112.northgrum.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> hi all,
>
> i am trying to add a new node to an existing 3 node GFS cluster.
>
> i followed the steps in the online docs for this, so i went onto the 1st
> node in my existing cluster, ran system-config-cluster, added a new node and
> fence for it, then propagated that out to the existing nodes, and scp'd the
> cluster.conf file to the new node.
>
> at that point, i confirmed that multipath and mdadm config files were
> synced with my other nodes, the new node can properly see the SAN that
> they're all sharing, etc.
>
> i then started cman, which seemed to start without any trouble.  i tried to
> start clvmd, but it says:
>
> Activating VGs: Skipping clustered volume group san01
>
> my VG is named "san01," so it can see the volume group, it just won't
> activate it for some reason.  any ideas what i'm doing wrong?
>
> thanks,
> --Joe
>
> ------------------------------
>
> Message: 3
> Date: Mon, 5 Jan 2009 15:25:36 -0500 (EST)
> From: Bob Peterson <rpeterso at redhat.com>
> Subject: Re: [Linux-cluster] problem adding new node to an existing
>        cluster
> To: linux clustering <linux-cluster at redhat.com>
> Message-ID:
>        <
> 868569604.2835591231187135219.JavaMail.root at zmail02.collab.prod.int.phx2.redhat.com
> >
>
> Content-Type: text/plain; charset=utf-8
>
> ----- "Joseph M. Greenseid" <Joseph.Greenseid at ngc.com> wrote:
> | hi all,
> |
> | i am trying to add a new node to an existing 3 node GFS cluster.
> |
> | i followed the steps in the online docs for this, so i went onto the
> | 1st node in my existing cluster, ran system-config-cluster, added a
> | new node and fence for it, then propagated that out to the existing
> | nodes, and scp'd the cluster.conf file to the new node.
> |
> | at that point, i confirmed that multipath and mdadm config files were
> | synced with my other nodes, the new node can properly see the SAN that
> | they're all sharing, etc.
> |
> | i then started cman, which seemed to start without any trouble. i
> | tried to start clvmd, but it says:
> |
> | Activating VGs: Skipping clustered volume group san01
> |
> | my VG is named "san01," so it can see the volume group, it just won't
> | activate it for some reason. any ideas what i'm doing wrong?
> |
> | thanks,
> | --Joe
>
> Hi Joe,
>
> Make sure that you have clvmd service running on the new node
> ("chkconfig clvmd on" and/or "service clvmd start" as necessary).
> Also, make sure the locking_type is 2 (RHEL4/similar) or 3 (RHEL5/similar)
> in the /etc/lvm/lvm.conf file.
>
> Regards,
>
> Bob Peterson
> Red Hat GFS
>
>
>
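
For reference, the setting Bob describes lives in the global section of
/etc/lvm/lvm.conf. A minimal sketch for a RHEL5-era node (comment wording
assumed, values per his note):

    global {
        # 3 = built-in clustered locking through clvmd (RHEL5/similar);
        # use 2 for an external locking library (RHEL4/similar)
        locking_type = 3
    }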
> ------------------------------
>
> Message: 4
> Date: Mon, 5 Jan 2009 14:28:12 -0600
> From: "Greenseid, Joseph M." <Joseph.Greenseid at ngc.com>
> Subject: RE: [Linux-cluster] problem adding new node to an existing
>        cluster
> To: "linux clustering" <linux-cluster at redhat.com>
> Message-ID:
>        <D089B7B0C0FBCD498494B5A0AA74827DDB386F at XMBIL112.northgrum.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> ---- "Joseph M. Greenseid" <Joseph.Greenseid at ngc.com> wrote:
> | hi all,
> |
> | i am trying to add a new node to an existing 3 node GFS cluster.
> |
> | i followed the steps in the online docs for this, so i went onto the
> | 1st node in my existing cluster, ran system-config-cluster, added a
> | new node and fence for it, then propagated that out to the existing
> | nodes, and scp'd the cluster.conf file to the new node.
> |
> | at that point, i confirmed that multipath and mdadm config files were
> | synced with my other nodes, the new node can properly see the SAN that
> | they're all sharing, etc.
> |
> | i then started cman, which seemed to start without any trouble. i
> | tried to start clvmd, but it says:
> |
> | Activating VGs: Skipping clustered volume group san01
> |
> | my VG is named "san01," so it can see the volume group, it just won't
> | activate it for some reason. any ideas what i'm doing wrong?
> |
> | thanks,
> | --Joe
>
> > Hi Joe,
>
> > Make sure that you have clvmd service running on the new node
> > ("chkconfig clvmd on" and/or "service clvmd start" as necessary).
>
> Hi Bob,
>
> Yes, this problem started when I tried to start clvmd (/sbin/service clvmd
> start).
>
>
> > Also, make sure the locking_type is 2 (RHEL4/similar) or 3 (RHEL5/similar)
> > in the /etc/lvm/lvm.conf file.
>
> Ah, OK, I believe this may be the trouble.  My locking_type was 1.  I'll
> change it and try again.  Thanks.
>
> --Joe
>
> > Regards,
>
> > Bob Peterson
> > Red Hat GFS
>
>
>
> ------------------------------
>
> Message: 5
> Date: Mon, 5 Jan 2009 15:10:29 -0600
> From: "Greenseid, Joseph M." <Joseph.Greenseid at ngc.com>
> Subject: RE: [Linux-cluster] problem adding new node to an existing
>        cluster
> To: "linux clustering" <linux-cluster at redhat.com>,      "linux clustering"
>        <linux-cluster at redhat.com>
> Message-ID:
>        <D089B7B0C0FBCD498494B5A0AA74827DDB3872 at XMBIL112.northgrum.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> > Also, make sure the locking_type is 2 (RHEL4/similar) or 3 (RHEL5/similar)
> > in the /etc/lvm/lvm.conf file.
>
> This fixed it.  Thanks.
>
> --Joe
>
>
> ------------------------------
>
> Message: 6
> Date: Mon, 5 Jan 2009 16:01:45 -0600
> From: "Greenseid, Joseph M." <Joseph.Greenseid at ngc.com>
> Subject: RE: [Linux-cluster] problem adding new node to an existing
>        cluster
> To: "linux clustering" <linux-cluster at redhat.com>
> Message-ID:
>        <D089B7B0C0FBCD498494B5A0AA74827DDB3873 at XMBIL112.northgrum.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi,
>
> I have a new question.  When I created this file system a year ago, I
> didn't anticipate needing any additional nodes other than the original 3 I
> set up.  Consequently, I have 3 journals.  Now that I've been told to add a
> fourth node, is there a way to add a journal to an existing file system that
> resides on a volume that has not been expanded?  (The docs appear to say that
> you can only do it on an expanded volume because the additional journal(s)
> take up additional space.)  My file system isn't full, though my volume is
> fully used by the formatted GFS file system.
>
> Is there anything I can do that won't involve destroying my existing file
> system?
>
> Thanks,
> --Joe
>
> ------------------------------
>
> Message: 7
> Date: Mon, 5 Jan 2009 18:09:18 -0500 (EST)
> From: Bob Peterson <rpeterso at redhat.com>
> Subject: Re: [Linux-cluster] problem adding new node to an existing
>        cluster
> To: linux clustering <linux-cluster at redhat.com>
> Message-ID:
>        <
> 291064814.51231196957732.JavaMail.root at zmail02.collab.prod.int.phx2.redhat.com
> >
>
> Content-Type: text/plain; charset=utf-8
>
> ----- "Joseph M. Greenseid" <Joseph.Greenseid at ngc.com> wrote:
> | Hi,
> |
> | I have a new question.  When I created this file system a year ago, I
> | didn't anticipate needing any additional nodes other than the original
> | 3 I set up.  Consequently, I have 3 journals.  Now that I've been told
> | to add a fourth node, is there a way to add a journal to an existing
> | file system that resides on a volume that has not been expanded?  (The
> | docs appear to say that you can only do it on an expanded volume
> | because the additional journal(s) take up additional space.)  My file
> | system isn't full, though my volume is fully used by the formatted GFS
> | file system.
> |
> | Is there anything I can do that won't involve destroying my existing
> | file system?
> |
> | Thanks,
> | --Joe
>
> Hi Joe,
>
> Journals for gfs file systems are carved out during mkfs.  The rest of the
> space is used for data and metadata.  So there are only two ways to
> make journals: (1) Do another mkfs, which will destroy your file system,
> or (2) if you're using lvm, add more storage with something like
> lvresize or lvextend, then use gfs_jadd to add the new journal to the
> new chunk of storage.
>
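
For option (2), the sequence would look roughly like the sketch below,
assuming the GFS file system sits on an LVM logical volume (device path,
size and mount point are illustrative):

    # grow the LV by enough space for one more journal
    # (GFS journals default to 128MB each)
    lvextend -L +256M /dev/san01/gfs01
    # add one journal to the mounted GFS file system
    gfs_jadd -j 1 /mnt/gfs01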
> We realize that's a pain, and that's why we took away that restriction
> in gfs2.  In gfs2, journals are kept as a hidden part of the file system,
> so they can be added painlessly to an existing file system without
> adding storage.   So I guess a third option would be to convert the file
> system to gfs2 using gfs2_convert, add the journal with gfs2_jadd, then
> use it as gfs2 from then on.  But please be aware that gfs2_convert had
> some serious problems until the 5.3 version that was committed to the
> cluster git tree in December (i.e., the very latest and greatest "RHEL5",
> "RHEL53", "master", "STABLE2" or "STABLE3" versions in the cluster git
> (source code) tree).  Make ABSOLUTELY CERTAIN that you have a working &
> recent backup and restore option before you try this.  Also, the GFS2
> kernel code prior to 5.3 is considered tech preview as well, so not ready
> for production use.  So if you're not building from source code, you
> should wait until RHEL5.3 or CentOS5.3 (or similar) before even
> considering this option.
>
> Regards,
>
> Bob Peterson
> Red Hat GFS
>
>
>
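
Should the gfs2 route ever be taken (on 5.3 or later, with the backups Bob
insists on), the rough shape would be the sketch below; device path and
mount point are illustrative, and the file system must be unmounted on all
nodes for the conversion:

    # convert the on-disk format in place (one-way; back up first)
    gfs2_convert /dev/san01/gfs01
    # after remounting as gfs2, add a journal for the fourth node
    gfs2_jadd -j 1 /mnt/gfs01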
> ------------------------------
>
> Message: 8
> Date: Tue, 6 Jan 2009 07:57:21 -0600
> From: "Greenseid, Joseph M." <Joseph.Greenseid at ngc.com>
> Subject: RE: [Linux-cluster] problem adding new node to an existing
>        cluster
> To: "linux clustering" <linux-cluster at redhat.com>,      "linux clustering"
>        <linux-cluster at redhat.com>
> Message-ID:
>        <D089B7B0C0FBCD498494B5A0AA74827DDB3875 at XMBIL112.northgrum.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> ---- "Joseph M. Greenseid" <Joseph.Greenseid at ngc.com> wrote:
> | Hi,
> |
> | I have a new question.  When I created this file system a year ago, I
> | didn't anticipate needing any additional nodes other than the original
> | 3 I set up.  Consequently, I have 3 journals.  Now that I've been told
> | to add a fourth node, is there a way to add a journal to an existing
> | file system that resides on a volume that has not been expanded?  (The
> | docs appear to say that you can only do it on an expanded volume
> | because the additional journal(s) take up additional space.)  My file
> | system isn't full, though my volume is fully used by the formatted GFS
> | file system.
> |
> | Is there anything I can do that won't involve destroying my existing
> | file system?
> |
> | Thanks,
> | --Joe
>
> > Hi Joe,
>
> > Journals for gfs file systems are carved out during mkfs.  The rest of
> > the space is used for data and metadata.  So there are only two ways to
> > make journals: (1) Do another mkfs, which will destroy your file system,
> > or (2) if you're using lvm, add more storage with something like
> > lvresize or lvextend, then use gfs_jadd to add the new journal to the
> > new chunk of storage.
> >
>
> OK, so I did understand correctly.  That's at least something positive.  :)
>
>
> > We realize that's a pain, and that's why we took away that restriction
> > in gfs2.  In gfs2, journals are kept as a hidden part of the file system,
> > so they can be added painlessly to an existing file system without
> > adding storage.   So I guess a third option would be to convert the file
> > system to gfs2 using gfs2_convert, add the journal with gfs2_jadd, then
> > use it as gfs2 from then on.  But please be aware that gfs2_convert had
> > some serious problems until the 5.3 version that was committed to the
> > cluster git tree in December (i.e., the very latest and greatest "RHEL5",
> > "RHEL53", "master", "STABLE2" or "STABLE3" versions in the cluster git
> > (source code) tree).  Make ABSOLUTELY CERTAIN that you have a working &
> > recent backup and restore option before you try this.  Also, the GFS2
> > kernel code prior to 5.3 is considered tech preview as well, so not ready
> > for production use.  So if you're not building from source code, you
> > should wait until RHEL5.3 or CentOS5.3 (or similar) before even
> > considering this option.
> >
>
>
> OK, I have an earlier version of GFS2, so I guess I'm going to need to sit
> down and figure out a better strategy for what I've been asked to do.  I
> appreciate the help with my questions, though.  Thanks again.
>
> --Joe
>
> > Regards,
> >
> > Bob Peterson
> > Red Hat GFS
>
>
>
>
> ------------------------------
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
> End of Linux-cluster Digest, Vol 57, Issue 5
> ********************************************
>