[Linux-cluster] 2 node vs 3 node cluster

Lon Hohberger lhh at redhat.com
Wed Aug 25 13:20:02 UTC 2004


On Wed, 2004-08-25 at 18:16 +1000, Nathan Dietsch wrote:

> In other clusters this is handled by allocating a device which both 
> machines have access to. (Each machine has a vote, plus the device has a 
> vote making an odd number of votes).
> When the machines lose sight of each other, they race to grab hold of 
> the device and whoever gets it (using SCSI-3 reservations usually) gets 
> to remain " in the cluster". The other node is "fenced off" from the 
> disks containing the data, usually panics and then reboots, only being 
> allowed back into the cluster once it can communicate with its peers.
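
Roughly, that race looks something like the sketch below.  It uses an
exclusive advisory lock on a file as a stand-in for the actual SCSI-3
persistent reservation, and the device path is made up:

    import fcntl
    import os
    import sys

    # Stand-in for the shared tie-breaker device; a real implementation
    # would take a SCSI-3 persistent reservation on the block device
    # rather than an advisory flock().
    QUORUM_DEV = "/shared/quorum_lock"      # hypothetical path

    def race_for_quorum_device(path=QUORUM_DEV):
        """Try to grab the tie-breaker exclusively; return an fd if we won."""
        fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
        try:
            # Non-blocking exclusive lock: exactly one node can hold it.
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return fd                        # we won; stay in the cluster
        except BlockingIOError:
            os.close(fd)
            return None                      # we lost; leave the cluster

    if __name__ == "__main__":
        if race_for_quorum_device() is None:
            # The loser panics/reboots rather than touching shared data.
            print("lost the race for the quorum device")
            sys.exit(1)
        print("won the race; continuing as the surviving node")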

Similar to the above is the use of a disk-based membership+quorum model
("whatever is writing to the disk is a member and counts toward quorum").
This works well in the 2-node case, but doesn't ensure network
connectivity, and isn't terribly scalable.
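
As a rough sketch of that model (a plain file stands in for the shared
device, and the slot size, record format, and timeout are all made-up
parameters):

    import os
    import struct
    import time

    # Assumed layout: one 512-byte slot per node on the shared device,
    # each holding a packed (node id, heartbeat counter, timestamp) record.
    SHARED_DEV = "/shared/quorum_area"      # hypothetical path
    SLOT_SIZE  = 512
    RECORD_FMT = "<IQd"                     # node id, counter, unix time
    DEAD_AFTER = 10.0                       # seconds with no fresh heartbeat

    def write_heartbeat(fd, node_id, counter):
        """A node proves membership by periodically writing its own slot."""
        record = struct.pack(RECORD_FMT, node_id, counter, time.time())
        os.pwrite(fd, record.ljust(SLOT_SIZE, b"\0"), node_id * SLOT_SIZE)

    def read_members(fd, node_count):
        """Whoever is still writing to the disk is a member and has a vote."""
        # fd would come from something like:
        #   fd = os.open(SHARED_DEV, os.O_RDWR | os.O_CREAT, 0o600)
        members = []
        now = time.time()
        rec_len = struct.calcsize(RECORD_FMT)
        for node_id in range(node_count):
            raw = os.pread(fd, rec_len, node_id * SLOT_SIZE)
            if len(raw) < rec_len:
                continue
            _, counter, stamp = struct.unpack(RECORD_FMT, raw)
            if counter and now - stamp < DEAD_AFTER:
                members.append(node_id)
        return members

A real implementation would compare successive counter values rather than
wall-clock timestamps, since the nodes' clocks aren't necessarily in sync.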

One can also use a disk-based membership as a backup to network
membership (e.g. membership determined over network; only in the event
of a potential split brain is the disk checked), but again, this
requires that each node be accessing the disk.
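
In other words, when the network membership drops a node, the check on the
surviving side amounts to something like this (just a sketch;
peer_still_on_disk would come from reading the peer's slot as above):

    def decide_on_membership_loss(peer_still_on_disk):
        # peer_still_on_disk comes from reading the peer's heartbeat slot
        # on the shared device, as in the sketch above.
        if peer_still_on_disk:
            # Both nodes are up but can't see each other over the network:
            # a potential split brain.  A tie-breaker picks the survivor,
            # and the loser is fenced before anything touches shared data.
            return "split-brain"
        # The peer is silent on the network and on the disk: presume it
        # dead, but fence it anyway before recovering its services.
        return "peer-dead"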

Both of the above allow continued concurrent access from all nodes to
shared partitions on a single device, but they require allocating space
on the shared devices for the membership/quorum data.

Another popular method of resolving split brain in even-node cases is
assigning a dummy vote to a router or some other device that responds to
ICMP_ECHO ;)
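
Something like this, say (the router address and timeout are placeholders;
it just shells out to ping(1)):

    import subprocess

    TIEBREAKER_IP = "192.168.1.1"           # placeholder router address

    def tiebreaker_vote(ip=TIEBREAKER_IP):
        """Return 1 if the tie-breaker answers ICMP_ECHO, else 0."""
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "1", ip],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return 1 if result.returncode == 0 else 0

    if __name__ == "__main__":
        # Two real nodes plus the dummy vote = 3 votes total; quorum is 2.
        # The partition that can still reach the router claims the extra
        # vote and keeps quorum; the other side does not.
        votes = 1 + tiebreaker_vote()
        print("quorum" if votes >= 2 else "no quorum")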

Again, similar to Nathan's example, these models require fencing to
ensure data integrity.

To be precise, "split brain" in data-sharing clusters is typically
equated to "data corruption".

-- Lon
