[Linux-cluster] Two node cluster, start CMAN fence the other node

Jankowski, Chris Chris.Jankowski at hp.com
Mon Apr 19 14:26:23 UTC 2010


Celso,

What would this limit buy you?

Honestly, I cannot see any rationale for this limit on a modern multicore, multi-CPU server.  Note that there is a real *physical* limit that kicks in as the load increases: one core becomes fully utilised by the gfs control daemon.  Even with that core saturated, the cluster works just fine and processes > 5,000 IO/s.

The limit made sense when you had 300 MHz single-core, single-CPU servers.  You could kill a server if you were not careful.
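For what it is worth, a plock_rate_limit of "0" turns the limiting off entirely, e.g. (a sketch only, adapting Alex's settings from below):

<!-- 0 = no plock rate limiting at all -->
<dlm plock_ownership="1" plock_rate_limit="0"/>
<gfs_controld plock_rate_limit="0"/>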

Regards,

Chris Jankowski

________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Celso K. Webber
Sent: Monday, 19 April 2010 22:44
To: linux clustering
Subject: Re: [Linux-cluster] Two node cluster, start CMAN fence the other node

Hello Chris,

Regarding number 2 below: in fact the default limits are "100", so Alex's configuration of "500" should already increase performance.

Of course, they can also be configured as unlimited, but that is something each person should decide for themselves according to the environment.

Please see this link:
http://www.linuxdynasty.org/howto-increase-gfs2-performance-in-a-cluster.html
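For what it's worth, one way to push such a change out to a running cluster (a sketch, assuming the RHEL 5-era tools; bump config_version in cluster.conf first):

# propagate the updated configuration to all cluster nodes
ccs_tool update /etc/cluster/cluster.conf
# tell cman about the new configuration version number
cman_tool version -r <new_config_version>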

Regards,

Celso.


________________________________
From: "Jankowski, Chris" <Chris.Jankowski at hp.com>
To: linux clustering <linux-cluster at redhat.com>
Sent: Fri, April 16, 2010 9:39:54 PM
Subject: Re: [Linux-cluster] Two node cluster, start CMAN fence the other node

Alex,

1.
Thank you very much.
The Cisco setup is very useful, and so are the commands for testing multicast.

2.
Looking at your cluster.conf, I would have thought that any limits on dlm and gfs lock rates are counterproductive in the days of multicore CPUs and GbE; in my opinion they should be unlimited.  Under high load the limiting factor will be saturation of one core by the gfs control daemon.

Regards,

Chris Jankowski

________________________________
From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Alex Re
Sent: Saturday, 17 April 2010 01:36
To: linux clustering
Subject: Re: [Linux-cluster] Two node cluster, start CMAN fence the other node

Hi Chris,

for the switch port settings, check out this URL:
http://www.openais.org/doku.php?id=faq:cisco_switches

We have finally configured an internal (private) VLAN joining one NIC of each blade server. Now all cluster-related traffic goes through those interfaces (eth2 on both servers in our case), including the traffic generated by lock_dlm for the GFS2 filesystem we just created.

To check multicast connectivity, two very useful commands are "nc -u -vvn -z <multicast_IP> 5405" to generate some multicast UDP traffic, and "tcpdump -i eth2 ether multicast" to watch for it on the other node (eth2 in my particular case, of course).
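Putting those together (using eth2 and the 239.0.0.1 address from the cluster.conf below):

# on node B: watch for multicast frames on the cluster interface
tcpdump -i eth2 ether multicast

# on node A: probe the cluster multicast address on the cman port
nc -u -vvn -z 239.0.0.1 5405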

I have been playing a little with lock_dlm; here is how my cluster.conf looks now:

<?xml version="1.0"?>
<cluster config_version="7" name="VCluster">
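    <!-- post_join_delay="25" gives a booting node time to join fenced before its peer fences it -->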
    <fence_daemon post_fail_delay="0" post_join_delay="25"/>
    <clusternodes>
        <clusternode name="nodeaint" nodeid="1" votes="1">
            <multicast addr="239.0.0.1" interface="eth2"/>
            <fence>
                <method name="1">
                    <device name="nodeaiLO"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="nodebint" nodeid="2" votes="1">
            <multicast addr="239.0.0.1" interface="eth2"/>
            <fence>
                <method name="1">
                    <device name="nodebiLO"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
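    <!-- two_node="1" with expected_votes="1": either node on its own remains quorate -->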
    <cman expected_votes="1" two_node="1">
        <multicast addr="239.0.0.1"/>
    </cman>
    <fencedevices>
        <fencedevice agent="fence_ilo" hostname="nodeacn" login="user" name="nodeaiLO" passwd="hp"/>
        <fencedevice agent="fence_ilo" hostname="nodebcn" login="user" name="nodebiLO" passwd="hp"/>
    </fencedevices>
    <rm>
        <failoverdomains/>
        <resources/>
    </rm>
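    <!-- plock tuning discussed in this thread: the default rate limit is 100, and 0 disables it -->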
    <dlm plock_ownership="1" plock_rate_limit="500"/>
    <gfs_controld plock_rate_limit="500"/>
</cluster>

Next thing to add... I'm going to play a little with the quorum devices.
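In case it helps, here is the kind of minimal qdisk stanza I have in mind (untested here; the votes, label and heuristic are placeholders):

<quorumd interval="1" tko="10" votes="1" label="vcluster_qdisk">
    <heuristic program="ping -c1 -w1 192.168.1.254" score="1" interval="2" tko="3"/>
</quorumd>

With a quorum disk in place, two_node="1" would normally go away and expected_votes would be raised to count the qdisk's vote.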

Hope it helps!

Alex



