[Linux-cluster] ccs_config_validate in cluster 3.0.X

Gianluca Cecchi gianluca.cecchi at gmail.com
Wed Oct 28 12:00:34 UTC 2009


On Wed, 28 Oct 2009 11:36:30 +0100  Fabio M. Di Nitto wrote:

> Hi everybody,
>
> as briefly mentioned in 3.0.4 release note, a new system to validate the
> configuration has been enabled in the code.

Hello,
updated my F11 today from cman-3.0.3-1.fc11.x86_64 to
cman-3.0.4-1.fc11.x86_64

I noticed the messages you referred to. See the attached image.
But after reading "man fenced" it seems my syntax is correct... or should I
change anything?

Here is an excerpt of my cluster.conf:
        <clusternodes>
                <clusternode name="kvm1" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="ilokvm1"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="kvm2" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="ilokvm2"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <fencedevices>
                <fencedevice agent="fence_ilo" hostname="myip1" login="fenceuser" name="ilokvm1" passwd="mypwd1"/>
                <fencedevice agent="fence_ilo" hostname="myip2" login="fenceuser" name="ilokvm2" passwd="mypwd2"/>
        </fencedevices>
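For what it's worth, one of the cross-references the new validator has to resolve in a config like this is that every <device name="X"/> under a <fence> block matches a <fencedevice name="X"/>. A minimal stand-alone sketch of that check (plain grep, not the real RelaxNG validation that ccs_config_validate performs, run on a scratch copy of the excerpt so it touches nothing live):

```shell
# Write a trimmed copy of the excerpt above to a scratch file.
CONF=/tmp/cluster.conf.excerpt
cat > "$CONF" <<'EOF'
<cluster>
  <clusternodes>
    <clusternode name="kvm1" nodeid="1">
      <fence><method name="1"><device name="ilokvm1"/></method></fence>
    </clusternode>
    <clusternode name="kvm2" nodeid="2">
      <fence><method name="1"><device name="ilokvm2"/></method></fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_ilo" name="ilokvm1"/>
    <fencedevice agent="fence_ilo" name="ilokvm2"/>
  </fencedevices>
</cluster>
EOF

# For each <device name="..."/>, look for a <fencedevice> carrying
# the same name attribute (leading space avoids matching hostname=).
for dev in $(grep -o '<device name="[^"]*"' "$CONF" | cut -d'"' -f2); do
  if grep '<fencedevice ' "$CONF" | grep -q " name=\"$dev\""; then
    echo "device $dev: fencedevice found"
  else
    echo "device $dev: NO matching fencedevice" >&2
  fi
done
```

This is only an illustration of the kind of consistency the validator enforces; the real check is schema-driven and run by ccs_config_validate.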

The cluster then starts without any problem, by the way.
The other node is currently powered off:
 [root@virtfed ~]# clustat
Cluster Status for kvm @ Wed Oct 28 12:41:43 2009
Member Status: Quorate

 Member Name                  ID   Status
 ------ ----                  ---- ------
 kvm1                            1 Online, Local, rgmanager
 kvm2                            2 Offline

 Service Name                 Owner (Last)        State
 ------- ----                 ----- ------        -----
 service:DRBDNODE1            kvm1                started
 service:DRBDNODE2            (none)              stopped

Two questions:
1) Now with cluster 3.x, what is the correct way to update the config and
propagate it? The ccs_tool command plus cman_tool version, or something else?
2) The modclusterd and ricci services keep dying on me... how can I debug
this? Are they necessary at all in cluster 3.x?
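Regarding question 1, here is a sketch of the update flow as I understand it from the 3.0.x man pages (please correct me if wrong): edit cluster.conf, bump config_version, then tell cman to reload. Shown here on a scratch copy, not the live config:

```shell
# Scratch copy for illustration; the real file is /etc/cluster/cluster.conf.
CONF=/tmp/cluster.conf.demo
cat > "$CONF" <<'EOF'
<cluster name="kvm" config_version="2">
</cluster>
EOF

# The version bump is required: nodes ignore a configuration whose
# config_version is not higher than the one they are already running.
sed -i 's/config_version="2"/config_version="3"/' "$CONF"
grep config_version "$CONF"

# On the live cluster (as root, with cman running) the new config is
# then loaded and propagated with:
#   cman_tool version -r
# which in 3.x replaces the old "ccs_tool update" step. ricci may still
# be needed by whatever tool copies the file to the other nodes
# (e.g. ccs_sync); check the docs for your exact version.
```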

Thanks,
Gianluca
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://listman.redhat.com/archives/linux-cluster/attachments/20091028/5cafca05/attachment.htm>

