[Linux-cluster] Question about cluster behavior
emmanuel segura
emi2fast at gmail.com
Fri Feb 14 17:58:51 UTC 2014
In this case your quorum disk should provide 2 votes; then, if two nodes
die and you want to continue with just one node: 1 (vote of the surviving
node) + 2 (quorum disk) = 3 votes out of 5, which is more than half.
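
Concretely, that would mean giving the quorum disk two votes in
cluster.conf. A minimal sketch, assuming qdiskd is otherwise configured as
in the posted config (the votes="2" attribute is the only addition):

```xml
<!-- total cluster votes: 3 nodes x 1 + qdisk x 2 = 5 -->
<cman expected_votes="5"/>
<!-- without an explicit votes attribute, the quorum disk defaults to 1 -->
<quorumd label="mail-qdisk" votes="2"/>
```

With this, quorum is 3 of 5, so a single node plus the quorum disk
(1 + 2 = 3) remains quorate.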
2014-02-14 18:34 GMT+01:00 Digimer <lists at alteeve.ca>:
> Replies in-line:
>
>
> On 14/02/14 12:07 PM, FABIO FERRARI wrote:
>
>> So it's not a normal behavior, I guess.
>>
>> Here is my cluster.conf:
>>
>> <?xml version="1.0"?>
>> <cluster config_version="59" name="mail">
>> <clusternodes>
>> <clusternode name="eta.mngt.unimo.it" nodeid="1">
>> <fence>
>> <method name="fence-eta">
>> <device name="fence-eta"/>
>> </method>
>> </fence>
>> </clusternode>
>> <clusternode name="beta.mngt.unimo.it" nodeid="2">
>> <fence>
>> <method name="fence-beta">
>> <device name="fence-beta"/>
>> </method>
>> </fence>
>> </clusternode>
>> <clusternode name="guerro.mngt.unimo.it" nodeid="3">
>> <fence>
>> <method name="fence-guerro">
>> <device name="fence-guerro" port="Guerro"
>> ssl="on" uuid="4213f370-9572-63c7-26e4-22f0f43843aa"/>
>> </method>
>> </fence>
>> </clusternode>
>> </clusternodes>
>> <cman expected_votes="5"/>
>>
>
> You generally don't need to set this, the cluster can calculate it.
>
> <quorumd label="mail-qdisk"/>
>>
>
> You don't set any votes, so the default is "1". So with expected votes
> being 5, that means all three nodes have to be up or two nodes and qdisk.
>
>
> <rm>
>> <resources>
>> <ip address="155.185.44.61/24" sleeptime="10"/>
>> <mysql config_file="/etc/my.cnf"
>> listen_address="155.185.44.61" name="mysql"
>> shutdown_wait="10" startup_wait="10"/>
>> <script file="/etc/init.d/httpd" name="httpd"/>
>> <script file="/etc/init.d/postfix"
>> name="postfix"/>
>> <script file="/etc/init.d/dovecot"
>> name="dovecot"/>
>> <fs device="/dev/mapper/mailvg-maillv"
>> force_fsck="1" force_unmount="1" fsid="58161"
>> fstype="xfs" mountpoint="/cl" name="mailvg-maillv"
>> options="defaults,noauto" self_fence="1"/>
>> <lvm lv_name="maillv" name="lvm-mailvg-maillv"
>> self_fence="1" vg_name="mailvg"/>
>> </resources>
>> <failoverdomains>
>> <failoverdomain name="mailfailoverdomain"
>> nofailback="1" ordered="1" restricted="1">
>> <failoverdomainnode
>> name="eta.mngt.unimo.it" priority="1"/>
>> <failoverdomainnode
>> name="beta.mngt.unimo.it" priority="2"/>
>> <failoverdomainnode
>> name="guerro.mngt.unimo.it" priority="3"/>
>> </failoverdomain>
>> </failoverdomains>
>> <service domain="mailfailoverdomain" max_restarts="3"
>> name="mailservices" recovery="restart"
>> restart_expire_time="600">
>> <fs ref="mailvg-maillv">
>> <ip ref="155.185.44.61/24">
>> <mysql ref="mysql">
>> <script ref="httpd"/>
>> <script ref="postfix"/>
>> <script ref="dovecot"/>
>> </mysql>
>> </ip>
>> </fs>
>> </service>
>> </rm>
>> <fencedevices>
>> <fencedevice agent="fence_ipmilan" auth="password"
>> ipaddr="155.185.135.105" lanplus="on" login="root"
>> name="fence-eta" passwd="******"
>> privlvl="ADMINISTRATOR"/>
>> <fencedevice agent="fence_ipmilan" auth="password"
>> ipaddr="155.185.135.106" lanplus="on" login="root"
>> name="fence-beta" passwd="******"
>> privlvl="ADMINISTRATOR"/>
>> <fencedevice agent="fence_vmware_soap"
>> ipaddr="155.185.0.10" login="etabetaguerro"
>> name="fence-guerro" passwd="******"/>
>> </fencedevices>
>> </cluster>
>>
>> What log file do you need? There are many in /var/log/cluster..
>>
>
> By default, /var/log/messages is the most useful. Checking 'cman_tool
> status' and 'clustat' are also good.
>
> --
> Digimer
> Papers and Projects: https://alteeve.ca/w/
> What if the cure for cancer is trapped in the mind of a person without
> access to education?
>
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
--
this is my life and I live it as long as God wills