[Linux-cluster] Two-node cluster: Node attempts stateful merge after clean reboot
emmanuel segura
emi2fast at gmail.com
Wed Sep 11 23:14:46 UTC 2013
So the solution is to not start the cluster at boot time? :) Nice
Red Hat supports expected_votes="1" but doesn't support clean_start=1? :) Nice
When Red Hat doesn't support a cluster option, I would like to see more
details and a technical example, not just "Red Hat doesn't support it".
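For context, the options being argued about all live in cluster.conf. A minimal two-node fragment showing where they go (a sketch only; the cluster name, config_version, and delay values are made-up):

```xml
<?xml version="1.0"?>
<!-- Illustrative fragment only; name, config_version and delays are invented. -->
<cluster name="example" config_version="1">
  <!-- two_node="1" requires expected_votes="1"; this is the supported way
       to let a two-node cluster stay quorate with a single vote. -->
  <cman two_node="1" expected_votes="1"/>
  <!-- clean_start defaults to "0" (startup fencing enabled). Setting it
       to "1" skips the check for nodes joining "with state" and is the
       unsupported option discussed in this thread. -->
  <fence_daemon clean_start="0" post_join_delay="30"/>
</cluster>
```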
2013/9/12 Thom Gardner <tmg at redhat.com>
> On Thu, Sep 12, 2013 at 12:21:28AM +0200, emmanuel segura wrote:
> > clean_start=1 disables startup fencing. If you use a quorum disk in
> > your cluster without expected_votes=1, then when a node starts after it
> > has been fenced, it doesn't try to fence the remaining node or start
> > the services, because rgmanager needs a quorate cluster. Many people
> > say clean_start=1 is dangerous, but no one gives a clear reason.
>
> Listen to Digimer. clean_start=1 is dangerous. It will allow a node
> to join a cluster "with state" and thus opens the door to split-brain.
> We (Red Hat) will not support a production cluster with clean_start
> turned on.
>
> > In my production cluster I have clvm + VGs in exclusive mode, with
> > clean_start=1 and master_wins. So if you can explain to me where the
> > problem is :) I'd appreciate it.
> >
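The quorum-disk half of that setup would be a quorumd stanza along these lines (a sketch; the label and the timing values are assumptions, not taken from the poster's configuration):

```xml
<!-- Illustrative qdiskd fragment; label, interval and tko are invented.
     master_wins="1" makes the qdisk master the surviving partition. -->
<quorumd label="qdisk" interval="1" tko="10" votes="1" master_wins="1"/>
```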
>
> It specifically targets a safety mechanism, and basically turns
> off the check for a node trying to join the cluster "with state",
> so it will gladly allow you to split-brain a FS or something really
> ugly like that. It's there for testing/debugging purposes only,
> and should never be used on a production system.
>
> As for leaving services turned off, Digimer is spot on with that one,
> too (that was you, wasn't it?). It is one of the ways we recommend
> getting around this fence loop problem. It's also the simplest one
> and, IMHO, the one with the most tolerable list of potential side
> effects, which again Digimer did a fine job of explaining (basically
> you have to start your cluster services manually, but if you have a
> fence event, you're going to probably be fixing something anyway on
> that machine, so, it's probably good that they're not coming up on
> their own, and it's no big thing to start things up when you're done).
>
> L8r,
> tg.
>
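Thom's recommended workaround above, leaving the cluster services off at boot and starting them by hand after a fence event, would look like this on RHEL 6, assuming the standard cman/rgmanager init scripts (run as root):

```shell
# Assumes RHEL 6 with the standard cman/rgmanager init scripts.
# Keep the cluster stack from starting automatically at boot:
chkconfig cman off
chkconfig rgmanager off

# After investigating why a node was fenced, bring the stack up manually:
service cman start
service rgmanager start
```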
> > 2013/9/11 Digimer <lists at alteeve.ca>
> >
> > On 11/09/13 12:04, emmanuel segura wrote:
> >
> > Hello Pascal
> >
> > To disable startup fencing you need clean_start=1 in the fence_daemon
> > tag. I saw in your previous mail that you are using expected_votes="1";
> > with that setting, each node can form its own quorate cluster and the
> > two halves will operate independently. I recommend using a quorum disk
> > with the master_wins parameter.
> >
> >
> > This is a very bad idea and is asking for a split-brain, the main
> > reason fencing exists at all.
> >
> >
> > --
> > Digimer
> > Papers and Projects: https://alteeve.ca/w/
> > What if the cure for cancer is trapped in the mind of a person
> without
> > access to education?
> >
> >
> >
> >
> > --
> > this is my life and I live it for as long as God wills
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>
--
this is my life and I live it for as long as God wills