[Linux-cluster] Re: Linux-cluster Digest, Vol 33, Issue 24

Vikrant Telkar vikrant.telkar at sunguru.com
Sat Jan 13 19:06:04 UTC 2007


Hello,
           I need help installing and configuring Red Hat Cluster. I need the following:
1. Documents that will help me install and configure it.
2. Where can I download the clustering software?
3. I also have to install GFS.

Thanks and Regards
vikrant.


--- linux-cluster-request at redhat.com wrote:

From: linux-cluster-request at redhat.com
To: linux-cluster at redhat.com
Subject: Linux-cluster Digest, Vol 33, Issue 24
Date: Fri, 12 Jan 2007 16:19:23 -0500 (EST)

Send Linux-cluster mailing list submissions to
	linux-cluster at redhat.com

To subscribe or unsubscribe via the World Wide Web, visit
	https://www.redhat.com/mailman/listinfo/linux-cluster
or, via email, send a message with subject or body 'help' to
	linux-cluster-request at redhat.com

You can reach the person managing the list at
	linux-cluster-owner at redhat.com

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Linux-cluster digest..."


Today's Topics:

   1. Re: Can't leave cluster (Robert Peterson)
   2. Re: RH Cluster doesn't pass basic acceptance tests - bug in
      fenced? (Lon Hohberger)
   3. Quorum disk question (Fedele Stabile)
   4. RE: GFS+EXT3 via NFS? (Lon Hohberger)
   5. Re: Quorum disk question (Jayson Vantuyl)
   6. Re: Maybe OT: GFS labels for iSCSI disks (James Parsons)
   7. Re: 2 missing patches in HEAD and RHEL5 branch. (rg_state.c
      and ip.sh) (Lon Hohberger)
   8. Re: Quorum disk question (Lon Hohberger)
   9. Re: ccsd problems (Lon Hohberger)
  10. Re: RH Cluster doesn't pass basic acceptance tests - bug in
      fenced? (Jayson Vantuyl)
  11. GFS UID/GID limit (Leonard Maiorani)
  12. Re: GFS UID/GID limit (Abhijith Das)
  13. Re: ccsd problems (Andre Henry)
  14. Re: GFS UID/GID limit (Lon Hohberger)
  15. Re: 2 missing patches in HEAD and RHEL5 branch. (rg_state.c
      and ip.sh) (Lon Hohberger)
  16. Re: Can't leave cluster (isplist at logicore.net)


----------------------------------------------------------------------

Message: 1
Date: Fri, 12 Jan 2007 11:27:25 -0600
From: Robert Peterson <rpeterso at redhat.com>
Subject: Re: [Linux-cluster] Can't leave cluster
To: isplist at logicore.net, linux clustering <linux-cluster at redhat.com>
Message-ID: <45A7C4FD.8010908 at redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

isplist at logicore.net wrote:
> That was indeed what it was. Here is my final shutdown script;
>
> service httpd stop
> umount /var/www
> vgchange -aln
> service clvmd stop
> fence_tool leave
> service fenced stop
> service rgmanager stop
> cman_tool leave
> killall ccsd
>
> Two questions;
>
> 1: I probably don't need the last line in there correct?
>
> 2: Can I create a new service so that I can run this script to shut things 
> down cleanly when I want to reboot the node? If so, what is the process?
>
> Mike
>   
Hi Mike,

1. I recommend "service ccsd stop" rather than killall ccsd.
2. In theory, this script should not be necessary on a RHEL, Fedora Core,
   or CentOS box if you have your service scripts set up and chkconfig'ed
   on.  When you do /sbin/reboot, the service scripts are supposed to run
   in the correct order and take care of all this for you.  Shutdown should
   take you to runlevel 6, which should run the shutdown scripts in
   /etc/rc.d/rc6.d in the Kxx order.  The httpd script should stop that
   service, then the gfs script should take care of unmounting the gfs
   file systems at "stop".  The clvmd script should take care of
   deactivating the vgs.  And the Kxx numbers should be set properly at
   install time to ensure the proper order.  If there's a problem shutting
   down with the normal scripts, perhaps we need to file a bug and get the
   scripts changed.

Regards,

Bob Peterson
Red Hat Cluster Suite
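
For reference, a minimal sketch of the chkconfig setup Bob describes, assuming
the usual RHEL4-era cluster init scripts (ccsd, cman, fenced, clvmd, gfs,
rgmanager); the exact script names depend on the packages installed, so check
/etc/init.d on your nodes before relying on this list.

# Enable the cluster init scripts so their Sxx/Kxx links handle startup
# and shutdown ordering (service names assumed; verify in /etc/init.d).
chkconfig ccsd on
chkconfig cman on
chkconfig fenced on
chkconfig clvmd on
chkconfig gfs on
chkconfig rgmanager on

# Verify the resulting shutdown links for runlevel 6:
ls /etc/rc.d/rc6.d/ | grep -E 'ccsd|cman|fenced|clvmd|gfs|rgmanager'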



------------------------------

Message: 2
Date: Fri, 12 Jan 2007 12:56:32 -0500
From: Lon Hohberger <lhh at redhat.com>
Subject: Re: [Linux-cluster] RH Cluster doesn't pass basic acceptance
	tests - bug in fenced?
To: linux clustering <linux-cluster at redhat.com>
Message-ID: <1168624592.15369.495.camel at rei.boston.devel.redhat.com>
Content-Type: text/plain

On Fri, 2007-01-12 at 13:41 +0100, Miroslav Zubcic wrote:
> 
> Josef Whiter wrote:
> 
> > This isn't a bug, it's working as expected.
> 
> IT people from the central bank don't think like that. I cannot blame
> them, because it is strange to me, and to anybody who has seen this RH
> cluster behaviour.
> 
> > What you need is qdisk; set it up
> > with the proper heuristics and it will force the shutdown of the bad node before
> > the bad node has a chance to fence off the working node.
> 
> This is just a workaround for the lack of communication between the clurgmgrd and
> fenced daemons, where the first is aware of the ethernet/network failure and is
> trying to disable the active service, while fenced is fencing the other node
> without any good reason, because it doesn't know that its own node is the faulty one.

There is no assumed correlation between the NIC(s) rgmanager uses for
services and the NIC(s) CMAN uses; many people use one network for
cluster traffic and another for service related traffic.  In this case,
a service failure due to a NIC link failing is far less of a problem:
The service fails, and it moves somewhere else in the cluster.

More generally, health of part of an rgmanager service != health of a
node.  They are independent, despite sometimes being correlative.


> I have an even better workaround (one bond with native data ethernet and a
> tagged vlan for the fence subnet) for this silly behaviour, but I would really
> like to see this fixed, because people laugh at us when
> testing our cluster configurations (we are configuring Red Hat machines
> and clusters).

I think it's interesting to point out that CMAN, when run in 2-node
mode, expects the fencing devices and cluster paths to be on the same
links.  This has the effect that whenever you pull the links out of a
node, that node actually can not possibly fence the "good" node because
it can't reach the fence devices.  It sounds like you altered your
configuration to match this using vlans over the same links.

As a side note, it would also be trivial to add a 'reboot on link loss'
option to the IP script in rgmanager. *shrug*.

-- Lon
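
A minimal sketch of the "reboot on link loss" idea mentioned above, assuming
ethtool is available and that eth0 is the interface carrying the service IP;
this is an illustration only, not the actual rgmanager ip.sh agent.

#!/bin/bash
# Sketch: reboot the node if the monitored NIC loses link.
# DEV is an assumption; substitute the interface your service IP lives on.
DEV=eth0
if ! ethtool "$DEV" | grep -q "Link detected: yes"; then
    logger -t linkwatch "link lost on $DEV, rebooting"
    reboot
fi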



------------------------------

Message: 3
Date: Fri, 12 Jan 2007 18:58:28 +0100
From: Fedele Stabile <fedele at fis.unical.it>
Subject: [Linux-cluster] Quorum disk question
To: linux clustering <linux-cluster at redhat.com>
Message-ID: <45A7CC44.4010409 at fis.unical.it>
Content-Type: text/plain; format=flowed; charset=ISO-8859-1

Can I configure a quorum disk giving it only a vote option and no heuristic parameters?
I.e.

in /etc/cluster.conf, can I configure the quorum disk in this way?

<quorumd interval="1" tko="10" votes="3" label="Quorum-disk">
</quorumd>


Thank you Fedele



------------------------------

Message: 4
Date: Fri, 12 Jan 2007 13:03:27 -0500
From: Lon Hohberger <lhh at redhat.com>
Subject: RE: [Linux-cluster] GFS+EXT3 via NFS?
To: linux clustering <linux-cluster at redhat.com>
Message-ID: <1168625007.15369.499.camel at rei.boston.devel.redhat.com>
Content-Type: text/plain

On Fri, 2007-01-12 at 10:09 -0500, Kovacs, Corey J. wrote:
> Lon, thanks. I could manually type the cluster.conf in but it would likely be
> riddled with typos :)
> 
> Suffice it to say that the exports are managed by the cluster which I think
> is the problem in our particular case as we have mixed GFS/EXT3 filesystems
> being exported from the same services. 

I wouldn't think this should matter, but it does depend on how they're
configured.  It should look something like:

  <service>
    <fs>
      <nfsexport>
        <nfsclient/>
        ...
      </nfsexport>
    </fs>
    <clusterfs>
      <nfsexport>
        <nfsclient/>
        ...
      </nfsexport>
    </clusterfs>
    <ip/>
  </service>


> That being said, what will the effect be of separating the services by
> filesystem type if a service exporting EXT3 fails over to a node exporting
> non-cluster-managed GFS exports? Will the mechanics of moving a
> cluster-managed export to a node with non-managed exports collide?

They shouldn't - as long as the exports don't overlap.

-- Lon



------------------------------

Message: 5
Date: Fri, 12 Jan 2007 12:20:26 -0600
From: Jayson Vantuyl <jvantuyl at engineyard.com>
Subject: Re: [Linux-cluster] Quorum disk question
To: linux clustering <linux-cluster at redhat.com>
Message-ID: <B2851925-8EFD-4222-BC92-77FEB2C8A030 at engineyard.com>
Content-Type: text/plain; charset="us-ascii"

You can.

If you can come up with good heuristics, even if it is just pinging
your network gateway, they are very helpful at keeping your cluster
functional.  Without them, a partition of your cluster will remain
quorate, but not necessarily the partition that can still be reached
from the Internet (or wherever its services are accessed from).

On Jan 12, 2007, at 11:58 AM, Fedele Stabile wrote:

> Can I configure a quorum disk giving it only a vote option and no  
> heuristic parameters?
> I.e.
>
> in /etc/cluster.conf, can I configure the quorum disk in this way?
>
> <quorumd interval="1" tko="10" votes="3" label="Quorum-disk">
> </quorumd>
>
>
> Thank you Fedele
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster



-- 
Jayson Vantuyl
Systems Architect
Engine Yard
jvantuyl at engineyard.com
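
A minimal sketch of the gateway-ping heuristic described above; the gateway
address 192.168.1.1 and the script itself are assumptions, and a script like
this (or the ping command directly) would be referenced from the "program"
setting of a quorumd heuristic in cluster.conf.

#!/bin/bash
# Sketch of a qdiskd heuristic: exit 0 only if the default gateway answers.
# 192.168.1.1 is a placeholder; substitute your own gateway.
GATEWAY=192.168.1.1
ping -c 1 -w 2 "$GATEWAY" >/dev/null 2>&1
exit $?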



------------------------------

Message: 6
Date: Fri, 12 Jan 2007 13:20:38 -0500
From: James Parsons <jparsons at redhat.com>
Subject: Re: [Linux-cluster] Maybe OT: GFS labels for iSCSI disks
To: linux clustering <linux-cluster at redhat.com>
Message-ID: <45A7D176.8060108 at redhat.com>
Content-Type: text/plain; charset=us-ascii; format=flowed

C. L. Martinez wrote:

> Hi,
>
> I'm using a RHEL 4 U4 with iscsitarget to serve local disks to two 
> RHEL 4U4 servers with RHCS and GFS, using RHEL initiator.
>
> When I add new raw disks to iscsitarget and restart the iscsid
> service on the RHEL clients with GFS, sometimes the device naming
> (/dev/sdX) changes and it's a mess
> to find the older volumes under the new device name.
>
> Does anybody know how to achieve persistent device naming for
> the iSCSI volumes on RHEL4? According to
> http://people.redhat.com/mchristi/iscsi/RHEL4/doc/readme, I need to
> use labels, but how can I assign labels on a GFS filesystem?
>
> I think I need to use a udev rule for that, but I'm new to this;
> any help or sample rule would be appreciated.
>
> Thanks.
>
Support for setting this up in the UI is planned for the next update of 
Conga...sorry that doesn't help you now, though.

-J
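
In the meantime, a minimal sketch of one way to see which /dev/sdX currently
corresponds to which iSCSI volume after a rescan, assuming the older scsi_id
invocation that takes a sysfs block path; this only reports the mapping, it
does not set up persistent names, and the exact scsi_id options should be
checked against scsi_id(8) on your release.

#!/bin/bash
# Sketch: print the SCSI unique ID for each sd device so you can tell
# which /dev/sdX is which volume.  The "-g -s /block/sdX" form is an
# assumption based on the RHEL4-era scsi_id; verify locally.
for dev in /sys/block/sd*; do
    name=$(basename "$dev")
    id=$(/sbin/scsi_id -g -s "/block/$name" 2>/dev/null)
    echo "$name  ${id:-<no id>}"
done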



------------------------------

Message: 7
Date: Fri, 12 Jan 2007 13:27:57 -0500
From: Lon Hohberger <lhh at redhat.com>
Subject: Re: [Linux-cluster] 2 missing patches in HEAD and RHEL5
	branch. (rg_state.c and ip.sh)
To: linux clustering <linux-cluster at redhat.com>
Message-ID: <1168626477.15369.503.camel at rei.boston.devel.redhat.com>
Content-Type: text/plain

On Fri, 2007-01-12 at 11:02 -0500, Lon Hohberger wrote:
> On Fri, 2007-01-12 at 14:59 +0100, Simone Gotti wrote:
> > Hi all,
> > 
> > On a 2-node openais cman cluster, I failed a network interface and
> > noticed that the service didn't fail over to the other node.
> > 
> > Looking at the rgmanager-2.0.16 code I noticed that:
> > 
> > handle_relocate_req is called with preferred_target = -1, but inside
> > this function there are 2 checks to see if the preferred_target is
> > set; the check is 'if (preferred_target != 0)', so the function
> > thinks that a preferred target was chosen. Then, inside the loop, the
> > only target that really exists is "me" (as -1 isn't a real target)
> > and there is a "goto exausted:"; the service is then restarted only on the
> > local node, where it fails again and so it's stopped. Changing these
> > checks to "> 0" worked.
> > 
> > Before writing a patch I noticed that the RHEL4 CVS tag uses
> > NODE_ID_NONE instead of the numeric values, so the problem (not tested)
> > probably doesn't happen there.

Good catch.

NODE_ID_NONE isn't in RHEL5[0]/HEAD right now; so the checks should be
">= 0" rather than "!= 0".  NODE_ID_NONE was (uint64_t)(-1) on RHEL4. 

> > The same problem is in the ip.sh resource script, as it's missing the
> > patch for "Fix bug in ip.sh allowing start of the IP if the link was
> > down, preventing failover (linux-cluster reported)." from 1.5.2.16 of
> > the RHEL4 branch.

You're right, it's missing.

-- Lon



------------------------------

Message: 8
Date: Fri, 12 Jan 2007 13:29:49 -0500
From: Lon Hohberger <lhh at redhat.com>
Subject: Re: [Linux-cluster] Quorum disk question
To: linux clustering <linux-cluster at redhat.com>
Message-ID: <1168626589.15369.506.camel at rei.boston.devel.redhat.com>
Content-Type: text/plain

On Fri, 2007-01-12 at 18:58 +0100, Fedele Stabile wrote:
> Can I configure a quorum disk giving it only a vote option and no heuristic parameters?
> I.e.
> 
> in /etc/cluster.conf, can I configure the quorum disk in this way?
> 
> <quorumd interval="1" tko="10" votes="3" label="Quorum-disk">
> </quorumd>

Not currently; there's a bug open about this:

https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=213533

-- Lon



------------------------------

Message: 9
Date: Fri, 12 Jan 2007 13:33:02 -0500
From: Lon Hohberger <lhh at redhat.com>
Subject: Re: [Linux-cluster] ccsd problems
To: Andre Henry <andre at hudat.com>
Cc: linux-cluster at redhat.com
Message-ID: <1168626782.15369.510.camel at rei.boston.devel.redhat.com>
Content-Type: text/plain

On Fri, 2007-01-12 at 09:27 -0500, Andre Henry wrote:
> The cluster is RHEL4. I rebooted both nodes during the maintenance  
> window and still only node one came back online. Node 2 will not even  
> start ccsd. Any ideas or debugging advice ?

What does ccsd -n say?

-- Lon




------------------------------

Message: 10
Date: Fri, 12 Jan 2007 12:34:12 -0600
From: Jayson Vantuyl <jvantuyl at engineyard.com>
Subject: Re: [Linux-cluster] RH Cluster doesn't pass basic acceptance
	tests - bug in fenced?
To: linux clustering <linux-cluster at redhat.com>
Message-ID: <E4D5C0F2-5FBF-47E4-8B65-9589FA772677 at engineyard.com>
Content-Type: text/plain; charset="us-ascii"

>> This isn't a bug, it's working as expected.
>
> IT people from the central bank don't think like that. I cannot
> blame
> them, because it is strange to me, and to anybody who has seen this RH
> cluster behaviour.
I have seen this behavior.  It is not strange to me.  This is only  
strange to people who do not understand how quorum systems work.

>> What you need is qdisk; set it up
>> with the proper heuristics and it will force the shutdown of the
>> bad node before
>> the bad node has a chance to fence off the working node.
>
> This is just a workaround for the lack of communication between
> the clurgmgrd and
> fenced daemons, where the first is aware of the ethernet/network failure
> and is
> trying to disable the active service, while fenced is fencing the other
> node
> without any good reason, because it doesn't know that its own node is
> the faulty one.

This is *NOT* a workaround for lack of communication.  clurgmgrd is  
responsible for starting and stopping services.  Fencing is  
responsible for keeping nodes running.  clurgmgrd does not have the  
information and is not the right service to handle this.

The problem is that you have a two node cluster.  If you had three  
nodes, this would not be an issue.  In a two-node cluster, the two  
nodes are both capable of fencing each other even though they no  
longer have quorum.  There is mathematically no other way to have a  
majority of 2 nodes without both of them.

The Quorum Disk allows the running nodes to use a heuristic--like the  
ethernet link check you speak of (or a ping to the network gateway  
which would also be helpful).  This heuristic allows you to  
artificially reach quorum by giving extra votes to the node that can  
still determine that it is okay.

> I have an even better workaround (one bond with native data
> ethernet and a
> tagged vlan for the fence subnet) for this silly behaviour, but I would
> really
> like to see this fixed, because people laugh at us when
> testing our cluster configurations (we are configuring Red Hat
> machines
> and clusters).
The moment that a node fails for any reason other than an ethernet
disconnection, your workaround falls apart.

If some "Central Bank" is truly your customer, then you should be
able to obtain a third node with no problems.  Otherwise, the Quorum
Disk provides better behavior than your "workaround" by actually
solving the problem in a generally applicable and sophisticated way.

This is a configuration problem.  If you desire not to be laughed at,
learn how to configure your software.  Also, for what it's worth, I
don't use bonding on my machines due to the switches I utilize (I use
bridging instead), but I would recommend keeping bonding for the
reliability of the ethernet, as it is an important failure case.

-- 
Jayson Vantuyl
Systems Architect
Engine Yard
jvantuyl at engineyard.com
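
To make the vote arithmetic above concrete (a sketch, assuming the common
setup of one vote per node and one vote for the quorum disk): with two nodes
alone the total is 2 votes and the quorum threshold is (2/2)+1 = 2, so
neither side of a split has a majority by itself.  Adding a quorum disk worth
1 vote raises the total to 3 while the threshold stays at (3/2)+1 = 2, so the
node that still passes the heuristic counts its own vote plus the qdisk vote
and remains quorate, while the isolated node does not.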



------------------------------

Message: 11
Date: Thu, 11 Jan 2007 09:19:18 -0700
From: "Leonard Maiorani" <leonard.maiorani at crosswalkinc.com>
Subject: [Linux-cluster] GFS UID/GID limit
To: <linux-cluster at redhat.com>
Message-ID:
	<2E02749DAF5338479606A056219BE109015883E5 at smail.crosswalkinc.com>
Content-Type: text/plain;	charset="us-ascii"

Is there an upper bound for UID/GIDs? 

Repeatedly I have seen GFS quota file problems when I have had UIDs
greater than 32768. Is this a limit?

-Lenny




------------------------------

Message: 12
Date: Fri, 12 Jan 2007 12:54:17 -0600
From: Abhijith Das <adas at redhat.com>
Subject: Re: [Linux-cluster] GFS UID/GID limit
To: linux clustering <linux-cluster at redhat.com>
Message-ID: <45A7D959.6030207 at redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Leonard Maiorani wrote:

>Is there an upper bound for UID/GIDs? 
>
>Repeatedly I have seen GFS quota file problems when I have had UIDs
>greater than 32768. Is this a limit?
>
>-Lenny
>
>  
>
There's a problem with the 'list' option in the gfs_quota tool with large
UID/GIDs. This problem has been fixed in RHEL4:

https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=210362

GFS1 and GFS2 in RHEL5 still have this issue and a solution needs to be 
worked out.
What other problems are you referring to? A bugzilla would help.

Thanks,
--Abhi
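
For anyone hitting this, a minimal sketch of querying quota entries for a
specific high UID with gfs_quota instead of relying on the 'list' output;
the mount point /mnt/gfs, the UID 40000, and the limit value are placeholders,
and the units of -l should be checked against the gfs_quota(8) man page for
your release.

# Query the quota for one large UID directly:
gfs_quota get -u 40000 -f /mnt/gfs

# Set a limit for that UID (size value is an example; check the units):
gfs_quota limit -u 40000 -l 1024 -f /mnt/gfs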



------------------------------

Message: 13
Date: Fri, 12 Jan 2007 14:19:44 -0500
From: Andre Henry <andre at hudat.com>
Subject: Re: [Linux-cluster] ccsd problems
To: Lon Hohberger <lhh at redhat.com>
Cc: linux-cluster at redhat.com
Message-ID: <72EB7A44-EEAB-4A97-946E-B02EFC17A194 at hudat.com>
Content-Type: text/plain; charset=US-ASCII; format=flowed


On Jan 12, 2007, at 1:33 PM, Lon Hohberger wrote:

> On Fri, 2007-01-12 at 09:27 -0500, Andre Henry wrote:
>> The cluster is RHEL4. I rebooted both nodes during the maintenance
>> window and still only node one came back online. Node 2 will not even
>> start ccsd. Any ideas or debugging advice ?
>
> What does ccsd -n say?
>
> -- Lon
>

Starting ccsd 1.0.7:
Built: Jun 22 2006 18:15:41
Copyright (C) Red Hat, Inc.  2004  All rights reserved.
   No Daemon:: SET

Unable to connect to cluster infrastructure after 30 seconds.

--
Thanks
Andre
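
The "Unable to connect to cluster infrastructure" message typically appears
while ccsd is waiting for the cluster manager to come up, so here is a sketch
of things worth comparing between the working and failing node (standard
RHEL4 cluster commands; adjust paths to your installation):

# Is cman actually running, and does this node see the cluster?
service cman status
cman_tool status
cman_tool nodes

# Is the config identical on both nodes (same config_version)?
cksum /etc/cluster/cluster.conf
grep config_version /etc/cluster/cluster.conf

# Does this node's hostname match a clusternode entry?
hostname
grep clusternode /etc/cluster/cluster.conf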





------------------------------

Message: 14
Date: Fri, 12 Jan 2007 15:14:13 -0500
From: Lon Hohberger <lhh at redhat.com>
Subject: Re: [Linux-cluster] GFS UID/GID limit
To: linux clustering <linux-cluster at redhat.com>
Message-ID: <1168632853.15369.526.camel at rei.boston.devel.redhat.com>
Content-Type: text/plain

On Fri, 2007-01-12 at 12:54 -0600, Abhijith Das wrote:
> Leonard Maiorani wrote:
> 
> >Is there an upper bound for UID/GIDs? 
> >
> >Repeatedly I have seen GFS quota file problems when I have had UIDs
> >greater than 32768. Is this a limit?
> >
> >-Lenny

> https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=210362

To clarify, it looks like this will go out in the next RHGFS update for
RHEL4.

-- Lon



------------------------------

Message: 15
Date: Fri, 12 Jan 2007 16:05:00 -0500
From: Lon Hohberger <lhh at redhat.com>
Subject: Re: [Linux-cluster] 2 missing patches in HEAD and RHEL5
	branch. (rg_state.c and ip.sh)
To: linux clustering <linux-cluster at redhat.com>
Message-ID: <1168635900.15369.541.camel at rei.boston.devel.redhat.com>
Content-Type: text/plain

On Fri, 2007-01-12 at 14:59 +0100, Simone Gotti wrote:
> Hi all,
> 
> On a 2-node openais cman cluster, I failed a network interface and
> noticed that the service didn't fail over to the other node.
> 
> Looking at the rgmanager-2.0.16 code I noticed that:
> 
> handle_relocate_req is called with preferred_target = -1, but inside
> this function there are 2 checks to see if the preferred_target is
> set; the check is 'if (preferred_target != 0)', so the function
> thinks that a preferred target was chosen. Then, inside the loop, the
> only target that really exists is "me" (as -1 isn't a real target)
> and there is a "goto exausted:"; the service is then restarted only on the
> local node, where it fails again and so it's stopped. Changing these
> checks to "> 0" worked.
> 
> Before writing a patch I noticed that the RHEL4 CVS tag uses
> NODE_ID_NONE instead of the numeric values, so the problem (not tested)
> probably doesn't happen there.
> Is it perhaps a forgotten patch on HEAD and RHEL5?

https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=222485

Please attach your patch if you have it; I wrote one, but yours is
already tested :)  (or you can send it here, too)


> The same problem is in the ip.sh resource script, as it's missing the
> patch for "Fix bug in ip.sh allowing start of the IP if the link was
> down, preventing failover (linux-cluster reported)." from 1.5.2.16 of
> the RHEL4 branch.

https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=222484

This one's already got a fix as you said in the RHEL4 branch; we'll use
it.

-- Lon



------------------------------

Message: 16
Date: Fri, 12 Jan 2007 15:19:15 -0600
From: "isplist at logicore.net" <isplist at logicore.net>
Subject: Re: [Linux-cluster] Can't leave cluster
To: linux-cluster <linux-cluster at redhat.com>
Message-ID: <2007112151915.382405 at leena>
Content-Type: text/plain; charset="ISO-8859-1"

> 1. I recommend "service ccsd stop" rather than killall ccsd.

This is actually my latest version. While it sometimes does not *seem* to work, it does
take the node out of the cluster cleanly. I say "seem" because it tells me that
the node is still in the cluster even though it's not.

# more stop_gfs
service httpd stop
umount /var/www
vgchange -aln
service clvmd stop
fence_tool leave
service fenced stop
cman_tool leave
service rgmanager stop
sleep 5
service cman stop


> 2. In theory, this script should not be necessary on a RHEL, Fedora Core
> or CentOS box if you have your service scripts set up and chkconfig'ed on. 
> When you do /sbin/reboot, the service scripts are supposed to run in the 
> correct order and take care of all this for you.

It never has, and I don't know why. I always figured it was because of the way I have to
start my nodes. I wanted to add my shutdown script into the shutdown run
levels so that it's automatic, but I'm not sure how to add that in.

> Shutdown should take you to runlevel 6, which should run the shutdown scripts 
> in /etc/rc.d/rc6.d in the Kxx order.

Do you mean I should just copy my shutdown script into that directory?

> If there's a problem shutting down with the normal scripts, perhaps
> we need to file a bug and get the scripts changed.

Well, here is my startup script for each node, maybe the answer lies in how I 
start them?

depmod -a
modprobe dm-mod
modprobe gfs
modprobe lock_dlm

service rgmanager start
ccsd
cman_tool join -w
fence_tool join -w
clvmd
vgchange -aly
mount -t gfs /dev/VolGroup04/web /var/www/

cp -f /var/www/system/httpd.conf /etc/httpd/conf/.
cp -f /var/www/system/php.ini /etc/.
/etc/init.d/httpd start

Mike
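
A minimal sketch of how a script like the one above could be wrapped as an
init script so chkconfig places it in the shutdown runlevels; the service
name, runlevels, priorities, and the /usr/local/sbin/stop_gfs path are all
assumptions to adjust, and on a stock install the packaged cluster init
scripts should make this unnecessary.

#!/bin/bash
# /etc/init.d/gfs-local -- sketch only; name and priorities are assumptions.
# chkconfig: 345 99 01
# description: site-local wrapper to stop cluster services cleanly at shutdown.
case "$1" in
  start)
    # Nothing to do at boot in this sketch; the packaged init scripts start things.
    ;;
  stop)
    /usr/local/sbin/stop_gfs    # path to the shutdown script above (assumed)
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
exit 0

# Then register it so rc0.d/rc6.d get the Kxx links:
#   chkconfig --add gfs-local
#   chkconfig gfs-local on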





------------------------------

--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster

End of Linux-cluster Digest, Vol 33, Issue 24
*********************************************





