[Cluster-devel] [DLM PATCH] dlm_controld: handle the case of network transient disconnection

David Teigland teigland at redhat.com
Fri May 13 15:49:13 UTC 2016


On Fri, May 13, 2016 at 01:45:47PM +0800, Eric Ren wrote:
> >the cluster.  Neither option is good.  In the past we decided to let the
> >cluster sit in this state so an admin could choose which nodes to remove.
> >Do you prefer the alternative of kicking nodes in this case (with somewhat
> >unpredictable results)?  If so, we could make that an optional setting,
> >but we'd want to keep the existing behavior for non-even partitions in the
> >example above.
> 
> Gotcha, thanks! But could you please elaborate a bit more on the
> meaning of "with somewhat unpredictable results"? Do you mean that
> some inconsistency problems may happen, or are you just worried that
> all services would be interrupted because all nodes would be fenced?

If both sides of the merged partition are kicking the other out of the
cluster at the same time, it's hard to predict which nodes will remain
(and it could be none).  To resolve an even partition merge, you need to
remove/restart the nodes on one of the former halves, i.e. either A,B or
C,D.  I never thought of a way to do that automatically in this code
(maybe higher-level code would have more options to resolve it).
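
For illustration only, resolving an A,B|C,D merge by hand might look
roughly like this on a systemd-managed pacemaker/corosync stack (a sketch,
not a prescription; the exact commands depend on your setup):

    # inspect membership and dlm state from a node you intend to keep
    corosync-quorumtool -s
    dlm_tool ls

    # on the half you chose to remove, e.g. nodes C and D:
    systemctl stop pacemaker    # stop resource management first
    systemctl stop corosync     # drop out of the cluster membership

    # once the surviving half (A,B) has settled, bring C and D back so
    # they rejoin as fresh members:
    systemctl start corosync
    systemctl start pacemaker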

> Actually, the reason why I'm working on this problem is because the
> patch for this issue has been merged into pacemaker:
> 
>        [1] https://github.com/ClusterLabs/pacemaker/pull/839
> 
> Personally, I think it's a totally bad fix that defeats the careful
> design and effort in dlm_controld, because it makes pacemaker fence
> this node whenever the dlm resource agent notices fencing in progress
> via "dlm_tool ls | grep -q "wait fencing"", right?
> 
> This fix is for:
>        [2] https://bugzilla.redhat.com/show_bug.cgi?id=1268313

"wait fencing" is normal.  Reading those links, it appears the real issue
was not identified before the patch was applied.
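
For anyone following along, that check amounts to roughly the following in
the agent's monitor path (paraphrasing the description above, not quoting
the actual code from the pacemaker PR):

    # paraphrase only: fail the monitor whenever any fence is pending
    if dlm_tool ls | grep -q "wait fencing"; then
        # pacemaker then treats this node as failed and may fence it,
        # even though "wait fencing" is a normal, transient state
        return $OCF_ERR_GENERIC
    fi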

> We've seen unnecessary fencing happen in a two-node cluster with this
> fix. E.g. both nodes will be fenced when we kill corosync on one of
> them.

Remove the bad fix and it should work better.

> Thanks for your suggestion. So far, it looks like an optional
> setting is much better than this fix. I'd like to give it a try ;-)

Two-node clusters are a special case of an even partition merge; I'm sure
you've seen the lengthy comment about that.  In a 2|2 partition merge, we
don't kick any nodes from the cluster, as explained above, and it
generally requires manual resolution.

But a 1|1 partition merge can sometimes be resolved automatically.  Quorum
can be disabled in a two-node cluster, and the fencing system allowed to
race between the two partitioned nodes to select a survivor (there are
caveats with that).  The area of code you've been looking at (with the long
comment) uses the result of the fencing race to resolve the possible
partition merge by kicking out the node that was fenced.
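
For reference, one common way to run a two-node cluster without strict
quorum is corosync's votequorum two_node mode; a minimal quorum section
might look something like this (a sketch, and only one way to do it; exact
options vary by corosync version):

    quorum {
        provider: corosync_votequorum
        # allow the cluster to keep running with only one of the two
        # nodes; this implicitly enables wait_for_all at startup
        two_node: 1
    }

With that in place, a 1|1 split comes down to the fencing race described
above: whichever node wins continues, and dlm_controld can kick out the
fenced node when the partitions merge again.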

Dave



