On Wed, 28 Oct 2009 11:36:30 +0100 Fabio M. Di Nitto wrote:
> Hi everybody,
>
> as briefly mentioned in 3.0.4 release note, a new system to validate the
> configuration has been enabled in the code.
Hello,
I updated my F11 today from cman-3.0.3-1.fc11.x86_64 to cman-3.0.4-1.fc11.x86_64.

I noticed the messages you referred to; see the attached image.
But according to "man fenced" my syntax seems correct... or is there something I should change?
Here is an excerpt of my cluster.conf:

        <clusternodes>
                <clusternode name="kvm1" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="ilokvm1"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="kvm2" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="ilokvm2"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice agent="fence_ilo" hostname="myip1" login="fenceuser" name="ilokvm1" passwd="mypwd1"/>
                <fencedevice agent="fence_ilo" hostname="myip2" login="fenceuser" name="ilokvm2" passwd="mypwd2"/>
        </fencedevices>

The cluster then starts without any problem, by the way.
The other node is currently powered off:

[root@virtfed ~]# clustat
Cluster Status for kvm @ Wed Oct 28 12:41:43 2009
Member Status: Quorate

 Member Name                                                   ID   Status
 ------ ----                                                   ---- ------
 kvm1                                                              1 Online, Local, rgmanager
 kvm2                                                              2 Offline

 Service Name                                         Owner (Last)                                         State
 ------- ----                                         ----- ------                                         -----
 service:DRBDNODE1                                    kvm1                                                 started
 service:DRBDNODE2                                    (none)                                               stopped
Two questions:
1) With cluster 3.x, what is now the correct way to update the configuration and propagate it to the other nodes? The ccs_tool command plus cman_tool version, or something else?
2) The modclusterd and ricci services keep dying on me. How can I debug this? Are they necessary at all in cluster 3.x?
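For reference, this is the update sequence I have been assuming still applies in 3.x (the version number below is just a placeholder from my setup, and I am not sure ccs_tool is still involved at all):

```shell
# Edit the config on one node and bump config_version by hand, e.g.
#   <cluster name="kvm" config_version="3">
vi /etc/cluster/cluster.conf

# Then tell cman to reload and propagate the new version to the
# running cluster (assuming this replaces the old ccs_tool update
# workflow from cluster 2.x):
cman_tool version -r 3
```

Is that the intended procedure, or am I missing a step?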
Thanks,
Gianluca