[Linux-cluster] Did you use GFS with which technology?

Henry Robertson henry.robertson at hjrconsulting.com
Tue Jun 30 18:50:51 UTC 2009


I've also had some issues with VM fencing, multicast, and VLANs.
When configured as:

VM --> Switch --> Dom0, the fencing operates fine. (You can test by
running fence_xvmd -dddd on the dom0 box housing the VM to be fenced and then
running a fence_node command from a clustered VM. You should see lots of debug
output about fence_xvm destroying the domain in question.)
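
Concretely, the test looks something like this (host and node names are just
placeholders for your own):

    [root@dom0 ~]# fence_xvmd -dddd      # heavy debug on the dom0 hosting the VM to be fenced
    [root@vm1 ~]# fence_node vm2         # from a clustered VM, ask the cluster to fence another member

If the multicast path is good, the fence_xvmd debug output shows the request
arriving and the domain being destroyed; if not, you see nothing at all.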

Adding a second hop in the middle causes an issue because the TTL of the
multicast packet is 1. (You can change the openAIS TTL default, but I never
got it to work with the firewall redirects.)
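
The knob in question is the TTL on the totem interface stanza; treat the
snippet below as a rough sketch (I'm not certain every openAIS build accepts a
ttl directive there, so check openais.conf(5) on your version first, and the
addresses are only examples):

    totem {
        interface {
            ringnumber: 0
            bindnetaddr: 192.168.50.0    # example cluster subnet
            mcastaddr: 239.192.1.1       # example multicast group
            mcastport: 5405
            ttl: 2                       # default is 1; >1 lets the packet cross a router
        }
    }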

VM --> Gateway VM --> Switch --> Dom0 fails. (If you run the same debug as
above, you'll see no traffic because the multicast never makes it to dom0, and
fence_node will report a timeout.)

If you add an extra interface to the VMs and the dom0s (think eth0.50, i.e. a
shared VLAN interface; see the sketch below), you can also get around the
multicast TTL issue. It's not preferred security-wise in some environments,
though, so we went the switch route.
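
On RHEL 5, that extra interface is just a tagged VLAN sub-interface on each
dom0 and VM; roughly (VLAN ID 50 and the addressing are only examples):

    # /etc/sysconfig/network-scripts/ifcfg-eth0.50
    DEVICE=eth0.50
    VLAN=yes
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.50.11        # unique per host on the shared VLAN
    NETMASK=255.255.255.0

With every node sitting on that one VLAN, the fencing multicast only ever has
to make a single hop again.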

If the switch(es) are configured to pass multicast between interfaces
unconditionally, it will work. I can't remember the exact Cisco directive at
the moment, but I can dig it up if someone needs it.

What I've done in the past is run the VMs in VLANs that all use the switch as
their default gateway (one hop), with the switch configured to pass those
multicast packets on to the next switch, and so on, until they reach the dom0
VLAN, which drops them off in the right place. This was specifically necessary
in one system where we had several layers of separation between the dom0s and
the domUs, to properly compartmentalize the data being processed.

Best of luck!

Henry Robertson


Message: 12
> Date: Tue, 30 Jun 2009 12:26:14 -0600
> From: "Andrew A. Neuschwander" <andrew at ntsg.umt.edu>
> Subject: Re: [Linux-cluster] Did you use GFS with which technology?
> To: linux clustering <linux-cluster at redhat.com>
> Message-ID: <4A4A58C6.50609 at ntsg.umt.edu>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> The tick divider first showed up in RHEL 5.1. It is a kernel command
> line option. Search for 'divider' in the release notes
> (
> http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/release-notes/RELEASE-NOTES-U1-x86-en.html
> ).
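
(For reference, the divider option goes on the kernel line in
/boot/grub/grub.conf; the value below is only an example -- divider=10 takes
the stock 1000Hz tick down to 100Hz, and the root/path arguments are whatever
your existing kernel line already uses:)

    kernel /vmlinuz-2.6.18-92.1.22.el5 ro root=/dev/VolGroup00/LogVol00 divider=10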
>
> Multicast works fine for me. I have 4 virtual and 1 physical member in
> my gfs cluster. The VMs are split onto two ESX nodes (along with 20+
> other VMs). The physical/virtual Ethernet switching works fine. But I
> did make sure the GFS VMs are using the vmxnet driver, as it has lower
> latency than the other virtual NIC driver. My gfs volumes consist of
> about 14TB of 4Gbps FC SAN LUNs.
>
> -A
> --
> Andrew A. Neuschwander, RHCE
> Systems/Software Engineer
> College of Forestry and Conservation
> The University of Montana
> http://www.ntsg.umt.edu
> andrew at ntsg.umt.edu - 406.243.6310
>
>
> Tiago Cruz wrote:
> > Hello Andrew! Many thanks for your reply!
> >
> > It's very good to see an environment like mine!
> >
> > I'm using RHEL 5.2 with kernel-2.6.18-92.1.22.el5... can you explain a
> > little bit about this tick divider?
> >
> > I usually have 2-3 IBM x3850 boxes (16 CPU cores and 128 GB RAM) with 10-15
> > virtual machines running under GFS, on a ~500 GB SAN LUN.
> >
> > My problem happens with multicast:
> >
> >       Switch -> GFS -> Switch = OK
> >       vSwitch (Box_A) -> Switch -> vSwitch (Box_B) = NOK
> >
> > Did you have any problems with this? If I put all the VMs inside the same
> > box (vSwitch Box_A -> vSwitch Box_A) I don't have any problem...
> >
> > Thanks a lot!
> >
>