I think you have a fencing-race problem; take a look at man fence_drac6.

Try delaying fencing on one node for the case where there is a problem with the cluster network:

=======================================
 --delay
     Wait X seconds before fencing is started (Default Value: 0)
=======================================
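For example, in cluster.conf the delay can be set as an attribute on one fence device, so that in a mutual fence race only one node gets shot. A minimal sketch against your config below; the 15-second value, and the choice of mailbox1 as the node that survives, are assumptions you would tune for your setup:

    <fencedevices>
        <!-- fencing of mailbox1 is delayed 15s, so mailbox1 wins a mutual fence race -->
        <fencedevice agent="fence_drac6" ipaddr="<drac IP>" login="<login name>"
                     name="imap1drac" passwd="xxxxx" delay="15"/>
        <!-- mailbox2 is fenced immediately -->
        <fencedevice agent="fence_drac6" ipaddr="<drac IP>" login="<login name>"
                     name="imap2drac" passwd="xxxxx"/>
    </fencedevices>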
And I see you don't have a quorum disk; using qdisk with a Red Hat cluster is always a good idea.
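A minimal qdisk sketch for cluster.conf (assuming a shared device already labeled cluster_qdisk with mkqdisk, and a pingable gateway at 192.168.1.254; both are placeholders for your environment). With the extra qdisk vote you would also raise expected_votes and drop two_node:

    <cman expected_votes="3"/>
    <quorumd interval="1" tko="10" votes="1" label="cluster_qdisk">
        <!-- a node keeps its qdisk vote only while it can reach the gateway -->
        <heuristic program="ping -c1 -w1 192.168.1.254" score="1" interval="2" tko="3"/>
    </quorumd>

That way a node that loses the network also loses quorum and stops its services, instead of racing to fence its peer.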
2012/1/25 jayesh.shinde <jayesh.shinde@netcore.co.in>:
Dear Emmanuel Segura,

Please find the config below. Because of policy, I have removed some login details.
#############

<?xml version="1.0"?>
<cluster config_version="6" name="new_cluster">
    <fence_daemon post_fail_delay="0" post_join_delay="3"/>
    <clusternodes>
        <clusternode name="mailbox1" nodeid="1" votes="1">
            <multicast addr="224.0.0.1" interface="bond0"/>
            <fence>
                <method name="1">
                    <device name="imap1drac"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="mailbox2" nodeid="2" votes="1">
            <multicast addr="224.0.0.1" interface="bond0"/>
            <fence>
                <method name="1">
                    <device name="imap2drac"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <cman expected_votes="1" two_node="1">
        <multicast addr="224.0.0.1"/>
    </cman>
    <fencedevices>
        <fencedevice agent="fence_drac6" ipaddr="<drac IP>" login="<login name>"
                     name="imap1drac" passwd="xxxxx"/>
        <fencedevice agent="fence_drac6" ipaddr="<drac IP>" login="<login name>"
                     name="imap2drac" passwd="xxxxx"/>
    </fencedevices>
    <rm>
        <failoverdomains/>
        <resources>
            <ip address="192.168.1.1" monitor_link="1"/>
            <fs device="/dev/drbd0" force_fsck="0" force_unmount="1"
                fsid="28418" fstype="ext3" mountpoint="/mount/path"
                name="imap1_fs" options="rw" self_fence="1"/>
            <script file="/etc/init.d/cyrus-imapd" name="imap1_init"/>
        </resources>
        <service autostart="1" name="imap1" recovery="restart">
            <ip ref="192.168.1.1"/>
            <fs ref="imap1_fs"/>
            <script ref="imap1_init"/>
        </service>
    </rm>
</cluster>

###################
Regards
Jayesh Shinde
On 01/25/2012 01:59 PM, emmanuel segura wrote:

Can you show me your cluster config?

2012/1/25 jayesh.shinde <jayesh.shinde@netcore.co.in>:

Hi all,
I have a few queries about how fencing works.

I am running two different 2-node clusters, one on Dell and one on IBM hardware, in two different IDCs. Recently the network failed at different times, and afterwards I found both nodes powered off.

Below is how the situation unfolded on each of my two 2-node clusters.

With the 2-node IBM cluster with SAN:
==============================
1) Network connectivity failed completely for a few minutes.
2) According to /var/log/messages, both servers failed to fence each other, and both servers stayed up with all services running.
3) But "clustat" showed the servers were not in cluster mode, and the "rgmanager" status was stopped.
4) I simply rebooted the servers.
5) After that, I found both servers in a powered-off state.

With the other 2-node Dell cluster with DRBD:
=================================
1) Network connectivity failed completely.
2) The DRAC IP was unreachable, so fencing failed from both servers.
3) After some time, I found the servers were shut down.

Under normal conditions both clusters work properly.

My queries now are:
===============
1) What could be the reason for the power off?
2) Did the cluster's fencing method cause the servers to power off (i.e., because of the earlier failed fence)?
3) Are there any test cases documented on the net / blogs / wikis about fencing, i.e., the different situations under which fencing acts?

Please guide.

Thanks & Regards
Jayesh Shinde
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster

--
this is my life and I live it as long as God wills