[Linux-cluster] corosync issue with two interface directives

Ben Shepherd bshepherd at voxeo.com
Sun Feb 5 19:35:21 UTC 2012


Hi, 

OK, so how does that affect the failover? Each of the networks is important:
if we lose either ring 0 or ring 1, we need to fail over.
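
As an aside, one thing worth checking: the corosync rings only carry cluster
membership traffic, so even with both rings configured, Pacemaker will not move
a VIP just because its network went away unless something actually detects the
loss of connectivity; the IPaddr monitor only checks that the address is still
assigned. The usual pattern, as far as I know, is an ocf:pacemaker:ping clone
plus location rules on the pingd attribute, roughly as below. The resource
names are made up and the gateway addresses are placeholders for one reachable
host per network, so treat this as a sketch only:

# ping one host on each network; score = 100 per reachable host
primitive ping-net ocf:pacemaker:ping \
        params host_list="10.251.96.161 10.122.147.193" multiplier="100" \
        op monitor interval="15s"
clone ping-net-clone ping-net
# 200 means both gateways reachable; move the VIPs if either is lost
location vip1-needs-nets failover-ip1 \
        rule -inf: not_defined pingd or pingd lt 200
location vip2-needs-nets failover-ip2 \
        rule -inf: not_defined pingd or pingd lt 200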

If I have the config shown below:
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
        version: 2
        secauth: on
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 10.251.96.160
                #broadcast: yes
                mcastaddr: 239.254.6.8
                mcastport: 5405
                ttl: 1
        }
        interface {
                ringnumber: 1
                bindnetaddr: 10.122.147.192
                #broadcast: yes
                mcastaddr: 239.254.6.9
                mcastport: 5405
                ttl: 1
        }
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}

And if I pull out the cable for the interface on ring 1, will it fail over? Or
will it use ring 1 only if ring 0 fails?

I read the documentation but it is less than clear :-)
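
For what it is worth, the behaviour seems to hinge on the rrp_mode option in
the totem section, which the config above leaves unset. The corosync.conf(5)
man page says the default is none, in which case only one interface is used;
to actually run two rings it has to be set to passive or active. In either
redundant mode, as I understand it, a failed ring is marked faulty and traffic
carries on over the surviving ring, whichever one fails. A minimal sketch,
assuming corosync 1.x:

totem {
        version: 2
        secauth: on
        threads: 0
        rrp_mode: passive
        # interface blocks for ring 0 and ring 1 as in the config above
}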

I would just try it and pull the cable out, but sadly that requires flying to
Vienna, which seems a little extravagant.
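
Although you may be able to simulate the cable pull without the flight: from a
session that comes in over the ring-0 network, take the ring-1 interface down
and watch the ring status. A sketch, with eth1 as a placeholder for the actual
ring-1 interface name:

# show the status of both rings on this node (look for "FAULTY")
corosync-cfgtool -s

# simulate losing ring 1; run this from a connection on the other network
ifdown eth1

# after bringing the interface back up, clear the faulty-ring state
ifup eth1
corosync-cfgtool -r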

From:  emmanuel segura <emi2fast at gmail.com>
Reply-To:  linux clustering <linux-cluster at redhat.com>
Date:  Sun, 5 Feb 2012 20:14:14 +0100
To:  linux clustering <linux-cluster at redhat.com>
Subject:  Re: [Linux-cluster] corosync issue with two interface directives

I think the ringnumber must be different for every network.

2012/2/5 Ben Shepherd <bshepherd at voxeo.com>
> We currently have a 2-node cluster. We configured HA on one network to take
> inbound traffic, with multicast in corosync and one VIP.
> 
> This works fine most of the time (sometimes, if you take the cable out, both
> interfaces end up with the VIP, but that is another story).
> The customer now has another network on which they want to take traffic. I
> have assigned the VIPs as follows:
> 
> node lxnivrr45.at.inside
> node lxnivrr46.at.inside
> primitive failover-ip1 ocf:heartbeat:IPaddr \
>         params ip="10.251.96.185" \
>         op monitor interval="10s"
> primitive failover-ip2 ocf:heartbeat:IPaddr \
>         params ip="10.2.150.201" \
>         op monitor interval="10s"
> colocation failover-ips inf: failover-ip1 failover-ip2
> property $id="cib-bootstrap-options" \
>         dc-version="1.1.5-5.el6-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
>         cluster-infrastructure="openais" \
>         expected-quorum-votes="2" \
>         no-quorum-policy="ignore" \
>         stonith-enabled="false"
> rsc_defaults $id="rsc-options" \
>         resource-stickiness="100"
> 
> Current Corosync configuration is:
> 
> # Please read the corosync.conf.5 manual page
> compatibility: whitetank
> 
> totem {
>         version: 2
>         secauth: on
>         threads: 0
>         interface {
>                 ringnumber: 0
>                 bindnetaddr: 10.251.96.160
>                 #broadcast: yes
>                 mcastaddr: 239.254.6.8
>                 mcastport: 5405
>                 ttl: 1
>         }
> }
> 
> logging {
>         fileline: off
>         to_stderr: no
>         to_logfile: yes
>         to_syslog: yes
>         logfile: /var/log/cluster/corosync.log
>         debug: off
>         timestamp: on
>         logger_subsys {
>                 subsys: AMF
>                 debug: off
>         }
> }
> 
> amf {
>         mode: disabled
> }
> 
> I am a little confused about which to use. Should I add the multicast address
> for the 2nd network as ring 1, or can I have 2 interfaces on ring 0 on
> different networks?
> 
> Giving me:
> 
> # Please read the corosync.conf.5 manual page
> compatibility: whitetank
> 
> totem {
>         version: 2
>         secauth: on
>         threads: 0
>         interface {
>                 ringnumber: 0
>                 bindnetaddr: 10.251.96.160
>                 #broadcast: yes
>                 mcastaddr: 239.254.6.8
>                 mcastport: 5405
>                 ttl: 1
>         }
>         interface {
>                 ringnumber: 0
>                 bindnetaddr: 10.122.147.192
>                 #broadcast: yes
>                 mcastaddr: 239.254.6.9
>                 mcastport: 5405
>                 ttl: 1
>         }
> }
> 
> logging {
>         fileline: off
>         to_stderr: no
>         to_logfile: yes
>         to_syslog: yes
>         logfile: /var/log/cluster/corosync.log
>         debug: off
>         timestamp: on
>         logger_subsys {
>                 subsys: AMF
>                 debug: off
>         }
> }
> 
> amf {
>         mode: disabled
> }
> 
> Just need to make sure that if I lose either of the interfaces, the VIPs fail
> over.
> 
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster



-- 
this is my life and I live it for as long as God wills
