[Linux-cluster] Cluster node hangs

sachin sachinbhugra at hotmail.com
Tue Feb 15 22:24:34 UTC 2011


Sorry for the delay friends. Actually, logs are scattered in different log
files:

 

1. For rgmanager logs I have configured /var/log/cluster.log.

2. Other cluster logs are going to the messages file. Presently I am
trying to find a way to gather all the cluster logs in one file other
than messages. It seems I can use the <logging> feature in cluster.conf;
comments?
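As a sketch of that <logging> feature: the cluster 3.x schema accepts a <logging> element in cluster.conf with global and per-daemon settings. The file path, facility, and attribute values below are illustrative assumptions, not tested recommendations:

```xml
<!-- Hypothetical cluster.conf fragment (cluster 3.x schema).
     Sends cluster logging to its own file instead of syslog,
     and moves the syslog facility off local4. -->
<logging to_syslog="no" to_logfile="yes"
         logfile="/var/log/cluster/cluster.log"
         syslog_facility="local5">
  <!-- Per-daemon overrides are also possible, e.g. extra rgmanager detail -->
  <logging_daemon name="rgmanager" debug="on"/>
</logging>
```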

 

I have openldap logging enabled on this server, which also uses the
local4 facility, so the logs from the cluster and ldap are getting mixed up.
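Alternatively, slapd itself can be moved off local4: slapd's -l option selects which syslog local-user facility it logs to (LOCAL0 through LOCAL7). A hypothetical RHEL-style sysconfig fragment (the file path and variable name are assumptions about your init scripts):

```shell
# Hypothetical /etc/sysconfig/ldap fragment: move slapd's syslog
# output from the default local4 facility to local5, so cluster
# messages on local4 are no longer interleaved with LDAP logs.
SLAPD_OPTIONS="-l LOCAL5"
```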

 

 

From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of dOminic
Sent: Sunday, February 13, 2011 8:03 PM
To: linux clustering
Subject: Re: [Linux-cluster] Cluster node hangs

 

Hi,

 

What is the message you are getting in the logs? It would be great if you
could attach the log messages along with cluster.conf.

 

-dominic 

 

On Sun, Feb 13, 2011 at 3:49 PM, Sachin Bhugra <sachinbhugra at hotmail.com>
wrote:

Thanks for the reply and the link. However, GFS2 is not listed in fstab; it
is only handled by the cluster config.

  _____  

Date: Sun, 13 Feb 2011 10:52:51 +0100
From: ekuric at redhat.com
To: linux-cluster at redhat.com
Subject: Re: [Linux-cluster] Cluster node hangs



On 02/13/2011 10:41 AM, Elvir Kuric wrote: 

On 02/13/2011 10:14 AM, Sachin Bhugra wrote: 

Hi ,

I have set up a two-node cluster in the lab, on VMware Server, and hence used
manual fencing. It includes an iSCSI GFS2 partition and an Apache service in
active/passive mode.

The cluster works and I am able to relocate the service between nodes with no
issues. However, the problem comes when I shut down the node that is
currently holding the service, for testing. When the node becomes
unavailable, the service gets relocated and the GFS partition gets mounted on
the other node; however, it is not accessible. If I try an "ls" or "du" on
the GFS partition, the command hangs. Meanwhile, the node that was shut down
gets stuck at "unmounting file system".

I tried using fence_manual -n nodename and then fence_ack_manual -n
nodename; however, it still behaves the same.

Can someone please tell me what I am doing wrong?

Thanks, 




--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster

It would be good to see the /etc/fstab configuration used on the cluster
nodes. If the /gfs partition is mounted manually, it will not be unmounted
correctly when you restart the node (without executing umount prior to the
restart), and it will hang during the shutdown/reboot process.
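For reference, if the GFS2 volume were listed in /etc/fstab instead of being handled only by the cluster, a sketch might look like this (the device path and mount point are hypothetical). The _netdev option defers the mount until the network is up and lets the netfs service unmount it before the network goes down at shutdown:

```
# Hypothetical fstab entry; device and mount point are examples only.
/dev/vg_cluster/lv_gfs2  /gfs  gfs2  defaults,_netdev  0 0
```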

More at:
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html-single/Global_File_System_2/index.html


Edit: in the above link, see section 3.4, "Special Considerations when
Mounting GFS2 File Systems".



Regards, 

Elvir 

 

 





 




 
