Thanks for your reply, Fabio. I think the problem may be at our end. Our infrastructure runs on Amazon EC2, and it turns out that the interfaces file of an EC2 node does not reference its public IP address.<br><br><div class="gmail_quote">
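For reference, corosync can only bind to an address that is actually configured on a local interface, and on EC2 the public IP is NAT'd by AWS rather than assigned to the instance, which is why pointing cluster.conf at the public IPs fails. One workaround we are considering (a sketch only, with hypothetical hostnames and addresses, and not something we have verified with cman/corosync across a NAT) is to give each node a name that resolves to its own private IP locally and to the peer's public IP remotely, then use those names in cluster.conf:<br>

```
# /etc/hosts on node1 (all addresses hypothetical)
10.0.1.10     node1.cluster   # this node's own private IP (bindable locally)
203.0.113.20  node2.cluster   # peer's elastic/public IP (reachable across LANs)

# /etc/hosts on node2 (mirror image)
10.0.2.10     node2.cluster
203.0.113.10  node1.cluster
```

The clusternode entries would then reference node1.cluster and node2.cluster instead of raw IPs, so each node binds to an address it actually owns while still reaching its peer.<br>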
On Mon, Oct 22, 2012 at 1:03 PM, Fabio M. Di Nitto <span dir="ltr"><<a href="mailto:fdinitto@redhat.com" target="_blank">fdinitto@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="im">On 10/17/2012 03:12 PM, Terance Dias wrote:<br>
> Hi,<br>
><br>
> We're trying to create a cluster in which the nodes lie in 2 different<br>
> LANs. Since the nodes lie in different networks, they cannot resolve the<br>
> other node by their internal IP. So in my cluster.conf file, I've<br>
> provided their external IPs. But now when I start CMAN service, I get<br>
> the following error.<br>
><br>
<br>
</div>First of all, we have never tested nodes on different LANs, so you might have<br>
issues there that we are not aware of (besides that, latency between<br>
nodes *MUST* be < 2ms).<br>
<br>
As for the IP/name, that should work, but I recall fixing something<br>
related not too long ago.<br>
<br>
What version of cman did you install and which distribution/OS?<br>
<br>
Fabio<br>
<div class="im"><br>
> -----------------------------------<br>
><br>
> Starting cluster:<br>
> Checking Network Manager... [ OK ]<br>
> Global setup... [ OK ]<br>
> Loading kernel modules... [ OK ]<br>
> Mounting configfs... [ OK ]<br>
> Starting cman... Cannot find node name in cluster.conf<br>
> Unable to get the configuration<br>
> Cannot find node name in cluster.conf<br>
> cman_tool: corosync daemon didn't start<br>
> [FAILED]<br>
><br>
> -------------------------------------<br>
><br>
> My cluster.conf file is as below<br>
><br>
> -------------------------------------<br>
><br>
> <?xml version="1.0"?><br>
> <!--<br>
> This is an example of a cluster.conf file to run qpidd HA under rgmanager.<br>
><br>
> NOTE: fencing is not shown, you must configure fencing appropriately for<br>
> your cluster.<br>
> --><br>
><br>
> <cluster name="test-cluster" config_version="18"><br>
> <!-- The cluster has 2 nodes. Each has a unique nodeid and one vote<br>
> for quorum. --><br>
> <clusternodes><br>
</div>> <clusternode name="/external-ip-1/" nodeid="1"/><br>
> <clusternode name="/external-ip-2/" nodeid="2"/><br>
<div class="im">> </clusternodes><br>
> <cman two_node="1" expected_votes="1" transport="udpu"><br>
> </cman><br>
> <!-- Resource Manager configuration. --><br>
> <rm><br>
> <!--<br>
> There is a failoverdomain for each node containing just that node.<br>
> This lets us stipulate that the qpidd service should always run<br>
> on each node.<br>
> --><br>
> <failoverdomains><br>
> <failoverdomain name="east-domain" restricted="1"><br>
</div>> <failoverdomainnode name="/external-ip-1/"/><br>
<div class="im">> </failoverdomain><br>
> <failoverdomain name="west-domain" restricted="1"><br>
</div>> <failoverdomainnode name="/external-ip-2/"/><br>
<div class="HOEnZb"><div class="h5">> </failoverdomain><br>
> </failoverdomains><br>
><br>
> <resources><br>
> <!-- This script starts a qpidd broker acting as a backup. --><br>
> <script file="/usr/local/etc/init.d/qpidd" name="qpidd"/><br>
><br>
> <!-- This script promotes the qpidd broker on this node to<br>
> primary. --><br>
> <script file="/usr/local/etc/init.d/qpidd-primary"<br>
> name="qpidd-primary"/><br>
> </resources><br>
><br>
> <!-- There is a qpidd service on each node, it should be restarted<br>
> if it fails. --><br>
> <service name="east-qpidd-service" domain="east-domain"<br>
> recovery="restart"><br>
> <script ref="qpidd"/><br>
> </service><br>
> <service name="west-qpidd-service" domain="west-domain"<br>
> recovery="restart"><br>
> <script ref="qpidd"/><br>
> </service><br>
><br>
> <!-- There should always be a single qpidd-primary service, it can<br>
> run on any node. --><br>
> <service name="qpidd-primary-service" autostart="1" exclusive="0"<br>
> recovery="relocate"><br>
> <script ref="qpidd-primary"/><br>
> </service><br>
> </rm><br>
> </cluster><br>
> ------------------------------------------------<br>
><br>
> Thanks,<br>
> Terance<br>
><br>
><br>
><br>
<br>
</div></div><div class="HOEnZb"><div class="h5">--<br>
Linux-cluster mailing list<br>
<a href="mailto:Linux-cluster@redhat.com">Linux-cluster@redhat.com</a><br>
<a href="https://www.redhat.com/mailman/listinfo/linux-cluster" target="_blank">https://www.redhat.com/mailman/listinfo/linux-cluster</a><br>
</div></div></blockquote></div><br>