[Linux-cluster] Linux-cluster Digest, Vol 72, Issue 13

rajatjpatel rajatjpatel at gmail.com
Tue Apr 13 17:36:49 UTC 2010


Regards,

Rajat J Patel

FIRST THEY IGNORE YOU...
THEN THEY LAUGH AT YOU...
THEN THEY FIGHT YOU...
THEN YOU WIN...


>    1. Configuration httpd service in linux cluster (Srija)
Hi Srija,

Which hardware are you using for the cluster? The following links will help
you set up cluster HA:

http://studyhat.blogspot.com/2009/11/clustering-linux-ha.html
http://studyhat.blogspot.com/2010/01/cluster-hp-ilo.html

I suggest you follow the second link to set up bond0 and bond1 first, and
then set up your cluster; a rough sketch of the bonding config is below.
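
As a minimal sketch only (not taken from the links above; the device names,
addresses, and bonding mode are hypothetical placeholders), bonding on RHEL
is configured along these lines:

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- example values only
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=1 miimon=100"   # active-backup with link monitoring

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- enslave a NIC to the bond
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

Repeat the slave stanza for the second NIC, and likewise for bond1.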



>    2. Re: Configuration httpd service in linux cluster (Paul M. Dyer)
>    3. [solved] Re: ccsd not starting (Bachman Kharazmi)
>    4. Re: iscsi qdisk failure cause reboot (jose nuno neto)
>    5. Re: ocf_log (C. Handel)


----------------------------------------------------------------------

Message: 1
Date: Mon, 12 Apr 2010 14:57:10 -0700 (PDT)
From: Srija <swap_project at yahoo.com>
To: linux clustering <linux-cluster at redhat.com>
Subject: [Linux-cluster] Configuration httpd service in linux cluster
Message-ID: <37531.56590.qm at web112802.mail.gq1.yahoo.com>
Content-Type: text/plain; charset=us-ascii

 Hi,

 I am trying to configure an httpd service in my three-node cluster
environment (RHEL 5.4 x86_64). I am new to cluster configuration.

 I have been following this document:

http://www.linuxtopia.org/online_books/linux_system_administration/redhat_cluster_configuration_and_management/s1-apache-inshttpd.html

But somehow it is not working. The node I assign the service to gets
fenced, and sometimes the server hangs while starting clvmd.

For this configuration I have kept my data on an LVM partition, which I
am using for the httpd content.

I also do not understand which IP I should assign to this service.

I first configured it with the server's public IP, and it did not work.
Then I configured it with the private IP, and it did not work either.

If anybody can point me to documentation that I can understand and
follow, it will be really appreciated.

Thanks in advance.






------------------------------

Message: 2
Date: Mon, 12 Apr 2010 17:22:57 -0500 (CDT)
From: "Paul M. Dyer" <pmdyer at ctgcentral2.com>
To: linux clustering <linux-cluster at redhat.com>
Subject: Re: [Linux-cluster] Configuration httpd service in linux
       cluster
Message-ID: <35054.21271110977480.JavaMail.root at athena>
Content-Type: text/plain; charset=utf-8

Configure a unique IP address, different from the public or private
addresses already in use. You probably want the Apache IP to be on the
public subnet. The Apache IP address will move between the nodes of the
cluster along with the Apache service; a minimal cluster.conf sketch is
below.
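
For illustration only (the address, device, and names here are hypothetical
placeholders, not taken from Srija's setup), the resource-manager section of
cluster.conf might look roughly like this:

<rm>
        <service autostart="1" name="httpd-svc">
                <!-- floating service IP on the public subnet -->
                <ip address="10.0.0.50" monitor_link="1"/>
                <!-- LVM-backed filesystem holding the httpd content -->
                <fs name="httpd-data" device="/dev/vg0/httpd"
                    mountpoint="/var/www/html" fstype="ext3"/>
                <apache name="httpd" server_root="/etc/httpd"
                        config_file="conf/httpd.conf"/>
        </service>
</rm>

Clients connect to the service IP, so it must not be one of the node
addresses.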

Paul

----- Original Message -----
From: "Srija" <swap_project at yahoo.com>
To: "linux clustering" <linux-cluster at redhat.com>
Sent: Monday, April 12, 2010 4:57:10 PM (GMT-0600) America/Chicago
Subject: [Linux-cluster] Configuration httpd service in linux cluster

[original message quoted in full; see Message 1 above]



------------------------------

Message: 3
Date: Tue, 13 Apr 2010 00:12:14 +0200
From: Bachman Kharazmi <bahkha at gmail.com>
To: linux-cluster at redhat.com
Subject: [Linux-cluster] [solved] Re: ccsd not starting
Message-ID:
       <h2o1ce16a2c1004121512t6392ec40s9b2db92722144019 at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

I had disabled the ipv6 module in Debian, which prevented ccsd from
starting. The default setting in Debian is no start argument, which
means both IPv4 and IPv6 are enabled, and the current stable ccsd in
Lenny cannot start if IPv6 is disabled in the OS and the "-4" start
argument is not given. From what I have heard the official Lenny packages
are old (cman 2.20081102-1+lenny1). Unfortunately ccsd did not log
anything to messages about the missing support on start-up, but
/usr/sbin/ccsd -n did print the error to stdout.
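
In short (both flags are from the message above; where to set the option
persistently depends on your init script, so check /etc/init.d/cman and
/etc/default/cman rather than taking this as gospel):

# run ccsd in the foreground to see why it dies (it prints to stdout, not syslog)
/usr/sbin/ccsd -n

# force IPv4-only operation when the ipv6 module is disabled
/usr/sbin/ccsd -4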


On 12 April 2010 13:09, Bachman Kharazmi <bahkha at gmail.com> wrote:
> Hi
> I'm running Debian Lenny with the packages gfs2-tools and
> redhat-cluster-suite installed.
> When I do /etc/init.d/cman start, I get:
> Starting cluster manager:
>  Loading kernel modules: done
>  Mounting config filesystem: done
>  Starting cluster configuration system: done
>  Joining cluster:cman_tool: ccsd is not running
>
>  done
>  Starting daemons: groupd fenced dlm_controld gfs_controld
>  Joining fence domain:fence_tool: can't communicate with fenced -1
>  done
>  Starting Quorum Disk daemon: done
>
> ccsd doesn't run; is that the reason why fence_tool cannot communicate
> with fenced?
>
> My cluster.conf and /etc/default/cman look like this:
>
> web3:~# cat /etc/default/cman
> CLUSTERNAME="cluster"
> NODENAME="web3"
> USE_CCS="yes"
> CLUSTER_JOIN_TIMEOUT=300
> CLUSTER_JOIN_OPTIONS=""
> CLUSTER_SHUTDOWN_TIMEOUT=60
>
> web3:~# cat /etc/cluster/cluster.conf
> <?xml version="1.0"?>
> <cluster name="cluster" config_version="1">
>
> <cman two_node="0" expected_votes="1"> </cman>
>
> <clusternodes>
> <clusternode name="web1" nodeid="1">
>        <fence>
>                <method name="single">
>                        <device name="manual" ipaddr="192.168.99.30"/>
>                </method>
>        </fence>
> </clusternode>
>
> <clusternode name="web2" nodeid="2">
>        <fence>
>                <method name="single">
>                        <device name="manual" ipaddr="192.168.99.40"/>
>                </method>
>        </fence>
> </clusternode>
>
> <clusternode name="web3" nodeid="3">
>        <fence>
>                <method name="single">
>                        <device name="manual" ipaddr="192.168.99.50"/>
>                </method>
>        </fence>
> </clusternode>
> </clusternodes>
>
> <fencedevices>
>        <fencedevice name="manual" agent="fence_manual"/>
> </fencedevices>
> </cluster>
>
> web3:~# /usr/sbin/ccsd
> web3:~# ps ax | grep ccsd
> 11935 pts/0    S+     0:00 grep ccsd
>
> strace /usr/sbin/ccsd output: http://pastebin.ca/1859435
> the process seems to die after reading cluster.conf
>
> I have an iSCSI block device /dev/sda available to three initiators.
>
> gfs2 fs is created using:
> mkfs.gfs2 -t cluster:share1 -p lock_dlm -j 4 /dev/sda1
>
> kernel is default: 2.6.26-2-amd64
> Have I missed anything to make the cman startup work properly?
>



------------------------------

Message: 4
Date: Tue, 13 Apr 2010 09:07:00 -0000 (GMT)
From: "jose nuno neto" <jose.neto at liber4e.com>
To: "linux clustering" <linux-cluster at redhat.com>
Subject: Re: [Linux-cluster] iscsi qdisk failure cause reboot
Message-ID:
       <9c3d680350c0de020103c8a6110e4e05.squirrel at fela.liber4e.com>
Content-Type: text/plain;charset=iso-8859-1

Hi Brem,

I've tried the max_error_cycles setting and it fixed this behavior.
Thanks a bunch.
It seems we're on the same path here... I'm almost finished :-)
See you,
Jose
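
For the archives, a minimal sketch of where the attribute goes; every
value here is a placeholder except max_error_cycles itself, which is the
setting Brem suggested below (allow_kill/reboot/heuristics are as
described in my original message):

<quorumd interval="1" tko="10" votes="1" label="qdisk"
         allow_kill="0" reboot="1" max_error_cycles="3">
        <heuristic program="ping -c1 192.168.99.1" score="1" interval="2"/>
</quorumd>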

> Hi Jose,
>
> check out the logs of the other nodes (the ones that remained alive)
> to see whether you have a message telling you that the node was
> killed "because it has rejoined the cluster with existing state"
>
> Also, you could add a max_error_cycles="your value" to your <quorumd
> device..../> in order to make qdisk exit after "your value" missed
> cycles.
>
> I posted a message a while ago about this feature 'max_error_cycles'
> not working, but I was wrong... Thx Lon
>
> If your quorum device is multipathed, make sure you don't queue
> (no_path_retry queue), as queueing won't deliver an ioerror to the
> upper layer (qdisk), and make sure the number of retries isn't higher
> than your qdisk interval (in my setup, no_path_retry fail, which means
> an immediate ioerror).
>
> Brem
>
>
>  2010/4/12 jose nuno neto <jose.neto at liber4e.com>:
>> Hi2All
>>
>> I have the following setup:
>> 2 nodes + qdisk (iscsi with 2 network paths and multipath)
>>
>> on qdisk I have allow_kill=0 and reboot=1, since I have some heuristics
>> and want to force some switching on network events
>>
>> the issue I'm facing now is that when one node has iscsi problems
>> (network down, for example) there is no impact on the cluster (which
>> is OK for me), but at recovery the node gets rebooted (not fenced by
>> the other node)
>>
>> If, while iscsi is down, I stop qdisk, let iscsi recover, and then
>> start qdisk again, I get no reboot
>>
>> Is this proper qdisk behavior? Does it keep track of some error count
>> and force a reboot?
>>
>> Thanks
>> Jose
>> Thanks
>> Jose
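
For completeness, a sketch of the multipath side; this is a hypothetical
/etc/multipath.conf fragment, and the only setting taken from the thread
is no_path_retry fail:

defaults {
        # fail I/O as soon as all paths are lost, instead of queueing,
        # so qdisk sees the ioerror within its interval
        no_path_retry   fail
}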



------------------------------

Message: 5
Date: Tue, 13 Apr 2010 12:02:50 +0200
From: "C. Handel" <christoph at macht-blau.org>
To: linux-cluster at redhat.com
Subject: Re: [Linux-cluster] ocf_log
Message-ID:
       <m2he01041c1004130302medc93b79u5548d9d4485b0e4e at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

On Thu, Apr 1, 2010 at 6:00 PM,  <linux-cluster-request at redhat.com> wrote:

>>> I'm writing a custom resource agent. In the resource agent I try to
>>> use the ocf_log functions, but they don't work as expected. When I run
>>> rgmanager in the foreground (clurgmgrd -df) I get all the messages
>>> I want. When it runs as a normal daemon I can't find my log entries.


>> Have you defined a syslog.conf entry for your local4 facility?


> Yes. Messages from logger (which uses the same facility as rm) and
> debug messages from the ip resource agent show up.

To close this question out for the archives:

I was missing sbin in my PATH environment variable.

Resource agents (when called from clurgmgrd) get a default PATH without
sbin. At the beginning of my agent I added sbin to the PATH, but I didn't
export it. When ocf_log is called, the actual logging is done by a call
to "clulog", which lives in sbin. Since the PATH seen by the sourced
shell script was unchanged, clulog was not found.

So the beginning of the resource agent is now:

LC_ALL=C
LANG=C
PATH=/bin:/sbin:/usr/bin:/usr/sbin
# remember to export path, so ocf_log function can find its clulog binary
export LC_ALL LANG PATH

. $(dirname $0)/ocf-shellfuncs
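
With PATH exported, a call like the following (a hypothetical example;
ocf_log takes a severity and a message) then reaches clulog and shows up
in syslog:

ocf_log info "myagent: start succeeded"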




Greetings
  Christoph



------------------------------

--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster

End of Linux-cluster Digest, Vol 72, Issue 13
*********************************************