R: [Linux-cluster] Cluster Centos - Don't switch resource
mulach
mulach at libero.it
Mon Oct 6 17:04:23 UTC 2008
It works fine, but I have a doubt: do I need to install heartbeat? If not,
in which cases would I need it?
Thanks
-----Original Message-----
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On behalf of Xavier Montagutelli
Sent: Monday, 22 September 2008 17:31
To: linux clustering
Subject: Re: [Linux-cluster] Cluster Centos - Don't switch resource
On Monday 22 September 2008 08:56, mulach at libero.it wrote:
> Hi,
>
> on CentOS 5.2 I created a cluster with Conga (the two nodes are VMware
> Server guests). The problem is that when a node fails, the service does
> not switch over. The cluster.conf is below:
>
> --------------------------------
> <?xml version="1.0"?>
> <cluster alias="SecondCluster" config_version="23" name="SecondCluster">
> <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
> <clusternodes>
> <clusternode name="clu1.localdomain" nodeid="1" votes="1">
> <fence>
> <method name="1"/>
> </fence>
> </clusternode>
> <clusternode name="clu2.localdomain" nodeid="2" votes="1">
> <fence>
> <method name="1"/>
> </fence>
> </clusternode>
> </clusternodes>
If I understand your cluster.conf correctly, you don't have any fencing
devices defined for your nodes. In my tests, I **had** to define fencing
methods for the service to switch when one node fails. Otherwise, it
doesn't work!
You can try configuring a "fence_manual" method (just for testing). After
clu1 fails, clu2 should "fence" it. You then have to run
"fence_ack_manual -n clu1.localdomain" to confirm manually that the
fencing is done.
http://sources.redhat.com/cluster/wiki/FAQ/Fencing#fence_manual2
In my cluster.conf, it looks like this:
<clusternode name="clu1.localdomain" nodeid="1" votes="1">
	<fence>
		<method name="1">
			<device name="manual_1" nodename="clu1.localdomain"/>
		</method>
	</fence>
</clusternode>
(same for clu2)
<fencedevices>
<fencedevice agent="fence_manual" name="manual_2"/>
<fencedevice agent="fence_manual" name="manual_1"/>
</fencedevices>
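Putting those two pieces together, a minimal testing-only fencing setup for the two nodes discussed in this thread could look like the sketch below. It only assumes the hostnames and fence device names already used above; fence_manual is never meant for production:

```xml
<!-- Sketch only, assuming the hostnames from this thread.
     fence_manual is for testing; use a real fence agent in production. -->
<clusternodes>
	<clusternode name="clu1.localdomain" nodeid="1" votes="1">
		<fence>
			<method name="1">
				<device name="manual_1" nodename="clu1.localdomain"/>
			</method>
		</fence>
	</clusternode>
	<clusternode name="clu2.localdomain" nodeid="2" votes="1">
		<fence>
			<method name="1">
				<device name="manual_2" nodename="clu2.localdomain"/>
			</method>
		</fence>
	</clusternode>
</clusternodes>
<fencedevices>
	<fencedevice agent="fence_manual" name="manual_1"/>
	<fencedevice agent="fence_manual" name="manual_2"/>
</fencedevices>
```

Note that each `<device>` sits inside its `<method>` element; after the surviving node declares the failed one fenced, you acknowledge it with fence_ack_manual as described above.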
Does that solve your problem?
> <cman expected_votes="1" two_node="1"/>
> <fencedevices/>
> <rm>
> <failoverdomains>
> <failoverdomain name="fail" ordered="1" restricted="1">
> <failoverdomainnode name="clu1.localdomain" priority="1"/>
> <failoverdomainnode name="clu2.localdomain" priority="2"/>
> </failoverdomain>
> </failoverdomains>
> <resources>
> <ip address="192.168.80.201" monitor_link="1"/>
> <fs device="/dev/sdb1" force_fsck="0" force_unmount="0" fsid="45662" fstype="ext3" mountpoint="/mnt/sdc" name="Share" options="" self_fence="0"/>
> </resources>
> <service autostart="1" domain="fail" exclusive="1" name="IPS" recovery="restart">
> <ip ref="192.168.80.201"/>
> <fs ref="Share"/>
> </service>
> </rm>
> <fence_xvmd/>
> </cluster>
>
> --------------------------------
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
--
Xavier Montagutelli Tel : +33 (0)5 55 45 77 20
Service Commun Informatique Fax : +33 (0)5 55 45 75 95
Universite de Limoges
123, avenue Albert Thomas
87060 Limoges cedex