[Linux-cluster] [Q] Good documentation about command line interface??

Digimer linux at alteeve.com
Sat May 28 17:31:12 UTC 2011


On 05/28/2011 08:39 AM, Hiroyuki Sato wrote:
> Dear members.
>
> I'm a newbie to Red Hat cluster.

Welcome!

> Could you point me to good documentation about the command line interface?
>   ( cman_tool, ccs_tool, ccs_test, fence_ack_manual ..)

These tools are all well documented in their man pages.
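
For example:

  man cman_tool
  man ccs_tool
  man fenced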

>   fence_ack_manual

<Digimer rolls up her sleeves and grabs her you-need-a-real-fence-device 
bat>

This is not supported in any way, shape or form. You *must* use a 
proper fence device. Do your servers have IPMI (or an OEM version like 
DRAC, iLO, etc.)?
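
If they do, the fence device entry would look something like this (a 
sketch only; the address, login and password are placeholders for your 
BMC's real values):

  <fencedevice name="ipmi_gfs1" agent="fence_ipmilan"
               ipaddr="10.0.0.1" login="admin" passwd="secret"/>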

Please read this:

http://wiki.alteeve.com/index.php/Red_Hat_Cluster_Service_2_Tutorial#Concept.3B_Virtual_Synchrony

Specifically, read "Concept; Virtual Synchrony" and "Concept; Fencing".

> Especially the following topics.
>
>    * How to rejoin a node to the cluster.
>    * How to remove a node from the cluster.

Starting and stopping the cman service will cause the node to join and 
leave the cluster, respectively. You can also do it manually if you 
wish; please check the man pages.
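
Roughly, something like this (the cman init script wraps these calls, 
along with fenced and friends):

  # Join the cluster ('service cman start' does this plus more):
  cman_tool join

  # Leave the cluster cleanly; the node must first stop using
  # cluster services ('service cman stop' handles that for you):
  cman_tool leave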

>    * How to use fence_ack_manual

Again, you can't. It is not supported.

>    * How to manage the cluster with command line tools.

ccs_tool is the main program to look at.
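
For example, after editing /etc/cluster/cluster.conf and bumping 
config_version, running this on one node should push the new config out 
to the rest of the cluster:

  ccs_tool update /etc/cluster/cluster.conf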

> One of my problems is the following.
>
> The status of the gfs3 node in my test cluster is JOIN_STOP_WAIT.
> I don't know how to re-join it.
>
> # /usr/sbin/cman_tool services
> type             level name     id       state
> fence            0     default  00000000 JOIN_STOP_WAIT

Without a working fence device, the cluster will block forever. As far 
as I know, once a fence call has been issued, there is nothing that can 
be done to abort it. I'd suggest pulling the power on the node, booting 
it cleanly, and then starting cman.
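
Roughly, the recovery would look like this (assuming gfs3 is the stuck 
node):

  # On gfs3, after the clean power cycle:
  service cman start

  # Then, from any node, verify the state:
  cman_tool services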

> I found the keyword 'fenced_override'. This file should be a named pipe.
> However, I can't find that file in the /var/run/cluster directory on my
> clusters. fenced is running on all of the clusters.

Again, it's not supported.

> Sincerely.
>
>
> * Environment
>
>    CentOS 5.6
>
>
> * Configurations
>
> <?xml version="1.0"?>
> <cluster name="arch_gfs1" config_version="21">
>    <cman expected_votes="1">

This is wrong; 'expected_votes' should be the total number of votes in 
the cluster, which is normally one per node (plus qdisk votes, if you 
are using one).
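
With the thirteen one-vote nodes defined below, that would be:

  <cman expected_votes="13">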

>    </cman>
>    <clusternodes>
>      <clusternode name="gfs1.doma.in" votes="1" nodeid="5">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs1.doma.in"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs2.doma.in" votes="1" nodeid="6">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs2.doma.in"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs3.doma.in" votes="1" nodeid="7">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs3.doma.in"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client1.doma.in" votes="1" nodeid="21">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client1.doma.in"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client2.doma.in" votes="1" nodeid="22">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client2.doma.in"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client3.doma.in" votes="1" nodeid="23">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client3.doma.in"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client4.doma.in" votes="1" nodeid="24">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client4.doma.in"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client5.doma.in" votes="1" nodeid="25">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client5.doma.in"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client6.doma.in" votes="1" nodeid="26">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client6.doma.in"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client7.doma.in" votes="1" nodeid="27">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client7.doma.in"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client8.doma.in" votes="1" nodeid="28">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client8.doma.in"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client9.doma.in" votes="1" nodeid="29">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client9.doma.in"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client10.doma.in" votes="1" nodeid="30">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client10.doma.in"/>
>          </method>
>        </fence>
>      </clusternode>
>    </clusternodes>
>    <fencedevices>
>      <fencedevice name="manual" agent="fence_manual"/>
>    </fencedevices>
>    <rm>
>      <failoverdomains/>
>      <resources/>
>    </rm>
> </cluster>

If you are on IRC, join #linux-cluster; it is also a great place to get 
help. I am usually there and will be happy to help you get a) fencing 
working and b) the rest of the cluster working.

Welcome to clustering! :)

-- 
Digimer
E-Mail: digimer at alteeve.com
AN!Whitepapers: http://alteeve.com
Node Assassin:  http://nodeassassin.org
"I feel confined, only free to expand myself within boundaries."



