[Linux-cluster] cluster failed to start

Heiko Nardmann heiko.nardmann at itechnical.de
Thu Sep 13 09:28:26 UTC 2012


Hi!

Those machines should have some ALOM/ILOM/similar management controller which 
you could use for power fencing, e.g. ...

You should then check whether a corresponding fence agent exists.
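
For example, if the service processors speak IPMI, the relevant cluster.conf
stanzas might look roughly like this (node names, addresses and credentials
are placeholders, not taken from your setup):

```xml
<!-- Sketch only: substitute your real node names, SP addresses and credentials. -->
<clusternodes>
  <clusternode name="node1" nodeid="1">
    <fence>
      <method name="ipmi">
        <device name="ipmi-node1"/>
      </method>
    </fence>
  </clusternode>
  <clusternode name="node2" nodeid="2">
    <fence>
      <method name="ipmi">
        <device name="ipmi-node2"/>
      </method>
    </fence>
  </clusternode>
</clusternodes>
<fencedevices>
  <fencedevice agent="fence_ipmilan" name="ipmi-node1"
               ipaddr="10.0.0.11" login="admin" passwd="secret" lanplus="1"/>
  <fencedevice agent="fence_ipmilan" name="ipmi-node2"
               ipaddr="10.0.0.12" login="admin" passwd="secret" lanplus="1"/>
</fencedevices>
```

You can also test the agent outside the cluster first, e.g.
"fence_ipmilan -a 10.0.0.11 -l admin -p secret -o status".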


Kind regards,

     Heiko

On 13.09.2012 10:11, Ben .T.George wrote:
> HI
>
> thanks for your reply. But on this test setup, how can I configure 
> fencing? I am running two Sun X4200 machines, and FreeNAS is used to 
> provide the iSCSI storage.
>
> my current NFS setup is working perfectly. In the production setup 
> I need to implement this on Cisco UCS; I saw that Cisco UCS is listed 
> under the fencing options.
>
> please help me add three more NFS shares to my existing configuration.
>
>
> regards,
> ben
>
> On Thu, Sep 13, 2012 at 9:53 AM, digimer <lists at alteeve.ca> wrote:
>
>     Please add fencing. Without it, the first time a node fails, your
>     cluster will hang (by design). Most servers have IPMI (or
>     similar), so you can probably use fence_ipmilan or one of the
>     brand-specific agents like fence_ilo for HP's iLO.
>
>
>     On 09/13/2012 02:44 AM, Ben .T.George wrote:
>
>         Hi
>
>         I manually created a cluster.conf file and copied it to my 2
>         nodes. Now it's working fine with one NFS HA share. I need to
>         add three more shares.
>
>         please check this: http://pastebin.com/eM08vrC5 (this is my
>         cluster.conf)
>
>         how can I add three more shares to this cluster.conf file?
>
>         please help, I am stuck with this project. After testing this
>         setup I need to implement it in production.
>
>
>         Regards,
>         Ben
>
>
>
>         On Wed, Sep 12, 2012 at 10:35 PM, Jan Pokorný
>         <jpokorny at redhat.com> wrote:
>
>             Hello Ben,
>
>             On 12/09/12 16:39 +0300, Ben .T.George wrote:
>              > I created a 2-node cluster with RHEL6 using the Red Hat
>              > Cluster Suite.
>              >
>              > I joined the cluster nodes using Luci.
>              >
>              > I created one IP as a resource and a service using
>              > that IP.
>              >
>              > I started the cluster, but the status in Luci shows as
>              > disabled, although that IP is pinging and it has been
>              > added on node2.
>              >
>              > "ip addr" is showing that IP.
>              >
>              > #clustat is showing both nodes online.
>
>             actually, thanks for bringing up what turned out to be a
>             real issue [1].
>             Could you please try "service modclusterd start" across
>             the nodes (and perhaps make the service persistent with
>             chkconfig) to see if it helps you?
>
>             In the meantime, this should serve as a workaround in such
>             cases; a proper fix for this bug is underway.
>
>             [1] https://bugzilla.redhat.com/show_bug.cgi?id=856785
>
>             Thanks,
>             Jan
>
>             --
>             Linux-cluster mailing list
>         Linux-cluster at redhat.com
>
>         https://www.redhat.com/mailman/listinfo/linux-cluster
>
>
>
>
>         --
>         Yours Sincerely
>
>         #!/usr/bin/env python
>         # Mysignature.py :)
>
>         signature = """Ben.T.George
>         Linux System Administrator
>         Diyar United Company
>         Kuwait
>         Phone : +965 - 50629829"""
>
>         print(signature)
>
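
Coming back to the actual question of adding three more shares: in an
rgmanager-based cluster.conf, each extra NFS export is typically one more
<service> under <rm>, holding an fs -> nfsexport -> nfsclient resource tree
plus a floating IP. Since the pastebin config is not reproduced here, the
service names, devices, mount points and IPs below are invented placeholders;
as a sketch, the edit can even be scripted:

```python
# Sketch: append extra NFS-export services to an existing cluster.conf.
# All names, devices, mount points and IPs are invented placeholders --
# they are NOT taken from the pastebin configuration.
import xml.etree.ElementTree as ET

EXTRA_SHARES = [
    # (service name, filesystem device, mount point, floating IP)
    ("nfs-share2", "/dev/vg0/lv_share2", "/exports/share2", "192.168.1.52"),
    ("nfs-share3", "/dev/vg0/lv_share3", "/exports/share3", "192.168.1.53"),
    ("nfs-share4", "/dev/vg0/lv_share4", "/exports/share4", "192.168.1.54"),
]

def add_nfs_services(conf_xml: str) -> str:
    """Return cluster.conf text with one <service> per extra share added."""
    root = ET.fromstring(conf_xml)
    rm = root.find("rm")  # resource-manager section
    if rm is None:
        rm = ET.SubElement(root, "rm")
    for name, dev, mnt, ip in EXTRA_SHARES:
        svc = ET.SubElement(rm, "service", name=name, autostart="1",
                            recovery="relocate")
        fs = ET.SubElement(svc, "fs", name=name + "-fs", device=dev,
                           mountpoint=mnt, fstype="ext4")
        # The nfsexport nests under the fs, the nfsclient under the export.
        exp = ET.SubElement(fs, "nfsexport", name=name + "-export")
        ET.SubElement(exp, "nfsclient", name=name + "-client",
                      target="*", options="rw,sync")
        ET.SubElement(svc, "ip", address=ip, monitor_link="1")
    # Bump config_version so the cluster stack picks up the change.
    root.set("config_version", str(int(root.get("config_version", "1")) + 1))
    return ET.tostring(root, encoding="unicode")
```

After editing, you would still validate the file on the nodes with
ccs_config_validate and propagate it with "cman_tool version -r" (or recopy
it and restart rgmanager).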
