[Cluster-devel] fencing conditions: what should trigger a fencing operation?

Fabio M. Di Nitto fdinitto at redhat.com
Thu Nov 19 18:10:54 UTC 2009


David Teigland wrote:
> On Thu, Nov 19, 2009 at 12:35:05PM +0100, Fabio M. Di Nitto wrote:
> 
>> - what are the current fencing policies?
> 
> node failure
> 
>> - what can we do to improve them?
> 
> node failure is a simple, black and white, fact
> 
>> - should we monitor for more failures than we do now?
> 
> corosync *exists* to detect node failure
> 
>> It is a known issue that node1 will crash at some point (kernel OOPS).
> 
> oops is not necessarily node failure; if you *want* it to be, then you
> sysctl -w kernel.panic_on_oops=1
> 
> (gfs has also had its own mount options over the years to force this
> behavior even if the sysctl isn't set properly; misconfiguration is a
> common issue. It seems panic_on_oops has had inconsistent default values
> across various releases, sometimes 0, sometimes 1; setting it has
> historically been part of the cluster/gfs documentation since most
> customers want it to be 1.)
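
For reference, a minimal sketch of applying that setting, assuming a
standard Linux sysctl setup (the persistent-config path can vary by
distribution):

    # set it on the running kernel immediately
    sysctl -w kernel.panic_on_oops=1

    # persist it across reboots, assuming /etc/sysctl.conf is read at boot
    echo "kernel.panic_on_oops = 1" >> /etc/sysctl.conf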

So a cluster can hang because our code failed, but we don't detect that
failure... so what determines a node failure? Only corosync dying?

panic_on_oops is not cluster-specific, and not every oops turns into a
panic, so it is not a clean solution.
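
As an illustrative check (plain procfs, nothing cluster-specific), the
current value can be inspected on each node with:

    # 1: an oops panics the node, so corosync sees a node failure; 0: it may just hang
    cat /proc/sys/kernel/panic_on_oops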

Fabio
