[Linux-cluster] [Q] Good documentation about command line interface??

Digimer linux at alteeve.com
Sun May 29 16:00:57 UTC 2011


On 05/29/2011 06:14 AM, Hiroyuki Sato wrote:
> Hello Digimer.
>
> Thank you for your information.
>
> This is exactly the document I was looking for!
> This doc is very useful. Thanks!

Wonderful, I'm glad you find it useful. :)

> I want to ask one thing.
>
> Please take a look at my cluster configuration again.

Will do, comments will be in-line.

> Mainly I want to use GNBD on gfs_clientX.
> The GNBD servers are gfs2 and gfs3.
>
> The gfs_client hardware does not support IPMI, iLO, etc.,
> because those machines are desktop computers.
>
> There is also no APC-like UPS or switched PDU.
>
> The desktop machines only support Wake-on-LAN.
>
> What fence device should I use?
> I was thinking fence_wake_on_lan would be the proper fence device,
> but no such agent exists.

The least expensive commercial option would be an APC switched PDU. You 
have 13 machines, so you would need either two of the 1U models or one 
of the 0U models.

If you are in North America, you can use these:

http://www.apc.com/products/resource/include/techspec_index.cfm?base_sku=AP7900

or

http://www.apc.com/products/resource/include/techspec_index.cfm?base_sku=AP7931

If you are in Japan, you'll need to select the best one of these:

http://www.apc.com/products/family/index.cfm?id=70&ISOCountryCode=JP

Whichever you get, you can use the 'fence_apc' fence agent.
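
To give you a rough idea, a PDU-based setup in cluster.conf would look 
something like the sketch below. The IP address, login, password and 
outlet numbers are only placeholders; substitute your own values:

   <fencedevices>
     <!-- PDU management IP and credentials; placeholders only -->
     <fencedevice name="pdu1" agent="fence_apc" ipaddr="192.168.1.100" login="apc" passwd="secret"/>
   </fencedevices>

Then each node's <fence> section references the outlet that node is 
plugged into, for example:

   <clusternode name="gfs_client1.archsystem.com" votes="1" nodeid="21">
     <fence>
       <method name="pdu">
         <!-- 'port' is the PDU outlet number feeding this node -->
         <device name="pdu1" port="1"/>
       </method>
     </fence>
   </clusternode>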

> Thank you for your advice.
>
> <?xml version="1.0"?>
> <cluster name="arch_gfs1" config_version="21">
>    <clusternodes>
>      <clusternode name="gfs1.archsystem.com" votes="1" nodeid="5">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs1.archsystem.com"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs2.archsystem.com" votes="1" nodeid="6">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs2.archsystem.com"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs3.archsystem.com" votes="1" nodeid="7">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs3.archsystem.com"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client1.archsystem.com" votes="1" nodeid="21">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client1.archsystem.com"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client2.archsystem.com" votes="1" nodeid="22">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client2.archsystem.com"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client3.archsystem.com" votes="1" nodeid="23">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client3.archsystem.com"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client4.archsystem.com" votes="1" nodeid="24">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client4.archsystem.com"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client5.archsystem.com" votes="1" nodeid="25">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client5.archsystem.com"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client6.archsystem.com" votes="1" nodeid="26">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client6.archsystem.com"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client7.archsystem.com" votes="1" nodeid="27">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client7.archsystem.com"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client8.archsystem.com" votes="1" nodeid="28">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client8.archsystem.com"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client9.archsystem.com" votes="1" nodeid="29">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client9.archsystem.com"/>
>          </method>
>        </fence>
>      </clusternode>
>      <clusternode name="gfs_client10.archsystem.com" votes="1" nodeid="30">
>        <fence>
>          <method name="single">
>            <device name="manual" nodename="gfs_client10.archsystem.com"/>
>          </method>
>        </fence>
>      </clusternode>
>    </clusternodes>
>    <fencedevices>
>      <fencedevice name="manual" agent="fence_manual"/>
>    </fencedevices>
>    <rm>
>      <failoverdomains/>
>      <resources/>
>    </rm>
> </cluster>
>
> Regards.

Outside of the "fence_manual" issue (manual fencing is not supported 
for production use), this looks fine. You will probably want to move 
the GFS and GNBD resources into rgmanager, but that can come later, 
after fencing is working and the core of the cluster has been tested.
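
When you do get to that point, a rough sketch of a GFS mount managed by 
rgmanager is below. It uses the clusterfs resource agent; the device 
path and mount point are only examples, so adjust them to your 
GNBD-backed volume:

   <rm>
     <failoverdomains/>
     <resources>
       <!-- Placeholder device and mount point -->
       <clusterfs name="gfs_data" fstype="gfs" device="/dev/gnbd/gfs_disk1" mountpoint="/mnt/gfs" force_unmount="1"/>
     </resources>
   </rm>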

Take a look at this:

http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Global_Network_Block_Device/s1-gnbd-mp-sn.html

It discusses fencing with GNBD. Below is a link to the start of the Red 
Hat GNBD document for EL5, which you may find helpful if you haven't 
read it already.

http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Global_Network_Block_Device/ch-gnbd.html
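
If it helps, the basic export/import flow from that document looks like 
this (the export name and device are only examples). On a GNBD server 
(gfs2 or gfs3):

   gnbd_export -v -e gfs_disk1 -d /dev/sdb1

And on each gfs_client, load the gnbd module and import from the server:

   modprobe gnbd
   gnbd_import -v -i gfs2.archsystem.com

The imported block device should then appear under /dev/gnbd/.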

Let me know if you want/need any more help. I'll be happy to see what I 
can do.

-- 
Digimer
E-Mail:              digimer at alteeve.com
Freenode handle:     digimer
Papers and Projects: http://alteeve.com
Node Assassin:       http://nodeassassin.org
"I feel confined, only free to expand myself within boundaries."



