<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=Content-Type content="text/html; charset=us-ascii">
<META content="MSHTML 6.00.6000.17023" name=GENERATOR></HEAD>
<BODY text=#000000 bgColor=#ffffff>
<DIV><SPAN class=972203300-17042010><FONT face=Arial color=#0000ff
size=2>Alex,</FONT></SPAN></DIV>
<DIV><SPAN class=972203300-17042010><FONT face=Arial color=#0000ff
size=2></FONT></SPAN> </DIV>
<DIV><SPAN class=972203300-17042010><FONT face=Arial color=#0000ff
size=2>1.</FONT></SPAN></DIV>
<DIV><SPAN class=972203300-17042010><FONT face=Arial color=#0000ff size=2>Thank
you very much.</FONT></SPAN></DIV>
<DIV><SPAN class=972203300-17042010><FONT face=Arial color=#0000ff size=2>The
Cisco setup is very useful and the commands for testing multicast as
well.</FONT></SPAN></DIV>
<DIV><SPAN class=972203300-17042010><FONT face=Arial color=#0000ff
size=2></FONT></SPAN> </DIV>
<DIV><SPAN class=972203300-17042010><FONT face=Arial color=#0000ff
size=2>2</FONT></SPAN></DIV>
<DIV><SPAN class=972203300-17042010><FONT face=Arial color=#0000ff size=2>Looking
at your cluster.conf, I would have thought that any limits on dlm and gfs lock
rates are counterproductive in these days of multicore CPUs and GbE. In my
opinion they should be unlimited. Under high load the limiting factor will be
saturation of one core by the gfs_controld daemon.</FONT></SPAN></DIV>
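<DIV><SPAN class=972203300-17042010><FONT face=Arial color=#0000ff size=2>If I
read the gfs_controld documentation correctly, setting plock_rate_limit="0"
removes the cap altogether, i.e. something like this in your cluster.conf (a
sketch of the change, untested here):</FONT></SPAN></DIV>
<DIV><SPAN class=972203300-17042010><FONT face="Courier New, Courier, monospace"
size=2>&nbsp;&nbsp;<dlm plock_ownership="1" plock_rate_limit="0"/><BR>
&nbsp;&nbsp;<gfs_controld plock_rate_limit="0"/></FONT></SPAN></DIV>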
<DIV><SPAN class=972203300-17042010><FONT face=Arial color=#0000ff
size=2></FONT></SPAN> </DIV>
<DIV><SPAN class=972203300-17042010><FONT face=Arial color=#0000ff
size=2>Regards,</FONT></SPAN></DIV>
<DIV><SPAN class=972203300-17042010><FONT face=Arial color=#0000ff
size=2></FONT></SPAN> </DIV>
<DIV><SPAN class=972203300-17042010><FONT face=Arial color=#0000ff size=2>Chris
Jankowski</FONT></SPAN></DIV><BR>
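<DIV><SPAN class=972203300-17042010><FONT face=Arial color=#0000ff size=2>P.S.
Before starting cman, the consistency of a cluster.conf like yours can also be
sanity-checked with a few lines of script. A sketch using only Python's
standard library; the embedded config is your posted one, abridged to the
attributes being checked:</FONT></SPAN></DIV>

```python
import xml.etree.ElementTree as ET

# Alex's posted cluster.conf, abridged to the parts checked below
CLUSTER_CONF = """<?xml version="1.0"?>
<cluster config_version="7" name="VCluster">
  <clusternodes>
    <clusternode name="nodeaint" nodeid="1" votes="1">
      <fence><method name="1"><device name="nodeaiLO"/></method></fence>
    </clusternode>
    <clusternode name="nodebint" nodeid="2" votes="1">
      <fence><method name="1"><device name="nodebiLO"/></method></fence>
    </clusternode>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
  <fencedevices>
    <fencedevice agent="fence_ilo" name="nodeaiLO"/>
    <fencedevice agent="fence_ilo" name="nodebiLO"/>
  </fencedevices>
</cluster>"""

root = ET.fromstring(CLUSTER_CONF)

# Every fence device referenced by a node must be declared in <fencedevices>
defined = {fd.get("name") for fd in root.iter("fencedevice")}

problems = []
for node in root.iter("clusternode"):
    devices = [d.get("name") for d in node.iter("device")]
    if not devices:
        problems.append("%s: no fence device configured" % node.get("name"))
    problems += ["%s: undefined fence device %s" % (node.get("name"), n)
                 for n in devices if n not in defined]

# cman requires expected_votes="1" whenever two_node="1"
cman = root.find("cman")
if cman.get("two_node") == "1" and cman.get("expected_votes") != "1":
    problems.append("two_node=1 requires expected_votes=1")

print(problems if problems else "cluster.conf looks consistent")
```

<DIV><SPAN class=972203300-17042010><FONT face=Arial color=#0000ff size=2>On
your posted config this reports no problems; it will catch a node whose fence
device name does not match a declared fencedevice.</FONT></SPAN></DIV>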
<BLOCKQUOTE dir=ltr style="MARGIN-RIGHT: 0px">
<DIV class=OutlookMessageHeader lang=en-us dir=ltr align=left>
<HR tabIndex=-1>
<FONT face=Tahoma size=2><B>From:</B> linux-cluster-bounces@redhat.com
[mailto:linux-cluster-bounces@redhat.com] <B>On Behalf Of </B>Alex
Re<BR><B>Sent:</B> Saturday, 17 April 2010 01:36<BR><B>To:</B> linux
clustering<BR><B>Subject:</B> Re: [Linux-cluster] Two node cluster, start CMAN
fence the other node<BR></FONT><BR></DIV>
<DIV></DIV>Hi Chris,<BR><BR>for the switch port settings, check out this
url:<BR><A class=moz-txt-link-freetext
href="http://www.openais.org/doku.php?id=faq:cisco_switches">http://www.openais.org/doku.php?id=faq:cisco_switches</A><BR><BR>We
have finally configured an internal (private) VLAN joining one NIC of each
blade server. Now all cluster-related traffic goes through those interfaces
(eth2 on both servers in our case), including the lock_dlm traffic of the
newly created GFS2 filesystem.<BR><BR>To check multicast connectivity, two
very useful commands are "nc -u -vvn -z <multicast_IP> 5405" to generate
some multicast UDP traffic, and "tcpdump -i eth2 ether multicast" to watch for
it from the other node (eth2 in my particular case, of course).<BR><BR>I have
been playing a little with the
lock_dlm, but here is how my cluster.conf looks now:<BR><BR><SMALL><FONT
face="Courier New, Courier, monospace"><?xml version="1.0"?><BR>
<cluster config_version="7" name="VCluster"><BR>
&nbsp;&nbsp;<fence_daemon post_fail_delay="0" post_join_delay="25"/><BR>
&nbsp;&nbsp;<clusternodes><BR>
&nbsp;&nbsp;&nbsp;&nbsp;<clusternode name="nodeaint" nodeid="1" votes="1"><BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<multicast addr="239.0.0.1" interface="eth2"/><BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<fence><BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<method name="1"><BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<device name="nodeaiLO"/><BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</method><BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</fence><BR>
&nbsp;&nbsp;&nbsp;&nbsp;</clusternode><BR>
&nbsp;&nbsp;&nbsp;&nbsp;<clusternode name="nodebint" nodeid="2" votes="1"><BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<multicast addr="239.0.0.1" interface="eth2"/><BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<fence><BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<method name="1"><BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<device name="nodebiLO"/><BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</method><BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</fence><BR>
&nbsp;&nbsp;&nbsp;&nbsp;</clusternode><BR>
&nbsp;&nbsp;</clusternodes><BR>
&nbsp;&nbsp;<cman expected_votes="1" two_node="1"><BR>
&nbsp;&nbsp;&nbsp;&nbsp;<multicast addr="239.0.0.1"/><BR>
&nbsp;&nbsp;</cman><BR>
&nbsp;&nbsp;<fencedevices><BR>
&nbsp;&nbsp;&nbsp;&nbsp;<fencedevice agent="fence_ilo" hostname="nodeacn" login="user" name="nodeaiLO" passwd="hp"/><BR>
&nbsp;&nbsp;&nbsp;&nbsp;<fencedevice agent="fence_ilo" hostname="nodebcn" login="user" name="nodebiLO" passwd="hp"/><BR>
&nbsp;&nbsp;</fencedevices><BR>
&nbsp;&nbsp;<rm><BR>
&nbsp;&nbsp;&nbsp;&nbsp;<failoverdomains/><BR>
&nbsp;&nbsp;&nbsp;&nbsp;<resources/><BR>
&nbsp;&nbsp;</rm><BR>
&nbsp;&nbsp;<dlm plock_ownership="1" plock_rate_limit="500"/><BR>
&nbsp;&nbsp;<gfs_controld plock_rate_limit="500"/><BR>
</cluster></FONT></SMALL><BR><BR>Next
thing to add... I'm going to play a little with the quorum
devices.<BR><BR>Hope it helps!<BR><BR>Alex<BR><BR>On 04/16/2010 05:00 PM,
Jankowski, Chris wrote:
<BLOCKQUOTE
cite=mid:036B68E61A28CA49AC2767596576CD596B7F2669AC@GVW1113EXC.americas.hpqcorp.net
type="cite"><SPAN class=622375914-16042010><FONT face=Arial color=#0000ff
size=2>eparate the cluster
interconne</FONT></SPAN></BLOCKQUOTE></BLOCKQUOTE></BODY></HTML>