[Linux-cluster] Multicasting problems

Alan A alan.zg at gmail.com
Wed Sep 9 18:38:03 UTC 2009


Haven't done that, but I am not positive it would help in our setting.
My tests were to establish a private VLAN with 3 private addresses for a 3-node
cluster. I had node1 on 192.168.10.21, node2 on 192.168.10.22, and node3 on
192.168.10.23.
I could ping each node from each node, so node1 would see node2 and node3,
node2 would see node1 and node3, and node3 would see node1 and node2. I made
/etc/hosts entries and verified with the 'route' command that device eth2
on each node was dedicated to the private network on 192.168.10.2x, as it
showed.
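For reference, the per-node setup looked roughly like this on node1 (a sketch; the '-priv' hostnames are placeholders I am using here for illustration):

```
# /etc/hosts entries on every node
192.168.10.21  node1-priv
192.168.10.22  node2-priv
192.168.10.23  node3-priv

# confirm eth2 carries the private subnet
route -n | grep '^192.168.10'
```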
There was no additional network device on the Cisco switch, just the 3
cluster nodes. I issued the cman_tool status command and got the multicast
address - I checked that it was the same on all three nodes, but when I pinged
the address I just got dead air... Nothing.
I tried forcing the cluster via the sysctl command to use IGMPv1, v2 and
v3... None worked.
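The checks were along these lines (a sketch; eth2 is the private interface, and the multicast address placeholder below has to be replaced with whatever cman_tool reports):

```
# find the multicast address the cluster uses
cman_tool status | grep -i multicast

# ping it out the private interface
ping -I eth2 239.192.x.x

# force a specific IGMP version on the interface
# (0 = kernel default; 1, 2 or 3 to force that version)
sysctl -w net.ipv4.conf.eth2.force_igmp_version=2
```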

On Wed, Sep 9, 2009 at 1:22 PM, Luis Cerezo <Luis.Cerezo at pgs.com> wrote:

> this may be completely unhelpful...
>
> have you tried changing the mcast address of the cluster?
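> For what it's worth, overriding the multicast address in
> /etc/cluster/cluster.conf looks roughly like this (a sketch; the
> 239.192.x.x address is a placeholder in the administratively scoped
> range, and cman has to be restarted on all nodes afterwards):
>
> ```
> <cman>
>   <multicast addr="239.192.100.1"/>
> </cman>
> ```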
>
> Luis E. Cerezo
> Global IT
> GV: +1 412 223 7396
>
> On Sep 9, 2009, at 7:32 AM, Alan A wrote:
>
> The problem lies in creating the VLAN that allows PIM. The firewall and the
> switch are one physical device, and once the firewall is on it directly
> manages the ports on the switch, and the firewall is not capable (according
> to our LAN/WAN engineers), at least not on this Cisco model, of managing or
> allowing PIM. For PIM we would need another dedicated device handling
> Sparse/Dense mode in front of the firewall, which is a major problem. That is
> why I am interested in what can be done on the server side - what options we
> can enable on the NICs directly to mimic PIM. The switch will allow IGMPv2
> communication, but in our tests, without a router-like device with PIM
> enabled, we were unable to form the cluster. Each node would send IGMP
> messages and would be totally unaware of the other nodes sending theirs.
>
> On Wed, Sep 9, 2009 at 4:08 AM, Jakov Sosic <jakov.sosic at srce.hr> wrote:
> On Tue, 8 Sep 2009 17:34:11 -0500
> Alan A <alan.zg at gmail.com> wrote:
>
> > It has come to the point where our cluster production configuration
> > has halted due to unexpected issues with multicasting on the LAN/WAN.
> >
> > The problem is that the firewall enabled on the switch ports does not
> > support multicasting, and between the cluster nodes and the routers lies
> > a firewall.
> >
> > Nodes -> Switch with integrated Firewall devices -> Router
> >
> > We are aware of problems encountered with Cisco switches and are
> > trying to clear some things. For instance in RHEL Knowledgebase
> > article 5933 it states:
> >
> > *The recommended method is to enable multicast routing for a given
> > vlan so that the Catalyst will act as the IGMP querier. This consists
> > of the following steps:*
> >
> >    1. Enabling multicast on the switch globally
> >    2. Choosing the vlan the cluster nodes are using
> >    3. Turning on PIM routing for that subnet
> >
> >
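> > The three steps above map to IOS configuration roughly like this (a
> > sketch; 'Vlan10' is a placeholder for the cluster VLAN, and exact
> > syntax varies by IOS release):
> >
> > ```
> > ip multicast-routing
> > !
> > interface Vlan10
> >  ip pim sparse-mode
> > ```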
> > My Questions:
> >
> > Can we enable PIM routing on the server NIC itself, without using a
> > dedicated network device? Meaning, IGMP multicast would be managed by
> > the NICs themselves on each node - can node awareness function this
> > way?
> >
> > Any suggestions on how to get around the firewall issue without
> > purchasing firewalls with routing tables?
> >
> > Cisco switch model is: switch 6509 running 12.2(18) SXF and IGMP v2.
>
> It seems that I was right with my diagnostics :D
>
>
> Why don't you create a VLAN with private subnet addresses, for example
> in 10.0.0.0/8, allow PIM on that VLAN, and trunk it with the regular
> vlan that you use now? Then configure RHCS to heartbeat over
> this new private VLAN with PIM enabled. You wouldn't need the firewall,
> because the VLAN would be used only for cluster communication, and it
> could be fully isolated. It does not need to be routed at all - because
> heartbeat packets go only between the nodes. So no external access to that
> VLAN would be enabled. It's perfectly safe.
>
> If you need help on configuring either Cisco 6500 or RHEL for VLAN
> trunking please ask. Take a look at 802.1Q standard to understand the
> issue:
>
> http://en.wikipedia.org/wiki/IEEE_802.1Q
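> For example, an 802.1Q tagged interface on RHEL can be defined like this
> (a sketch; VLAN ID 10, the device name, and the address are placeholders,
> and the 8021q kernel module must be loaded):
>
> ```
> # /etc/sysconfig/network-scripts/ifcfg-eth2.10
> DEVICE=eth2.10
> VLAN=yes
> BOOTPROTO=static
> IPADDR=10.0.0.21
> NETMASK=255.255.255.0
> ONBOOT=yes
> ```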
>
>
>
> --
> |    Jakov Sosic    |    ICQ: 28410271    |   PGP: 0x965CAE2D   |
> =================================================================
> | start fighting cancer -> http://www.worldcommunitygrid.org/   |
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
>
> --
> Alan A.
>
>
>
>



-- 
Alan A.

