[Linux-cluster] Quorum Disk on 2 nodes out of 4?

Karl Podesta kpodesta at redbrick.dcu.ie
Wed Nov 18 11:08:42 UTC 2009


On Wed, Nov 18, 2009 at 06:32:25AM +0100, Fabio M. Di Nitto wrote:
> > Apologies if a similar question has been asked in the past, any inputs, 
> > thoughts, or pointers welcome. 
> 
> Ideally you would find a way to plug the storage into the 2 nodes that
> do not have it now, and then run qdisk on top.
> 
> At that point you can also benefit from "global" failover of the
> applications across all the nodes.
> 
> Fabio

Thanks for the reply and pointers - indeed, attaching all 4 nodes to the
storage and running qdisk on top sounds best. In the particular scenario
above, 2 of the nodes don't have any HBA cards or attachment to storage,
so if storage connections could not be obtained, an IP tiebreaker might
have to be introduced in case the cluster were to split into two halves.
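For what it's worth, the usual way to get a network tiebreaker with qdiskd is a ping heuristic in cluster.conf. A rough sketch below - the label, IP address, and tuning values are purely illustrative, and this still assumes the qdisk device itself is visible to the nodes running qdiskd:

```xml
<!-- quorumd with a ping heuristic: a node only contributes its qdisk
     vote while it can reach the router/gateway (illustrative values) -->
<quorumd interval="1" tko="10" votes="1" label="clusterqdisk">
  <heuristic program="ping -c1 -w1 192.168.1.254"
             score="1" interval="2" tko="4"/>
</quorumd>
```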

I wonder how common that type of quorum disk setup would be these days.
I gather most would use GFS in this scenario with 4 nodes, eliminating
the need for any specific failover of an ext3 disk mount etc., and merely
failing over the services across all cluster nodes instead.
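For reference, that GFS-based approach might look roughly like the following in cluster.conf - a minimal sketch with hypothetical names (gfsdata, device path, service contents), assuming rgmanager's clusterfs resource agent; since the GFS mount is shared by all nodes, only the service itself fails over, not the filesystem:

```xml
<rm>
  <resources>
    <!-- shared GFS mount: active on all nodes, no mount failover needed -->
    <clusterfs name="gfsdata" fstype="gfs" mountpoint="/data"
               device="/dev/mapper/vg0-gfsvol" force_unmount="0"/>
  </resources>
  <service name="myservice" autostart="1">
    <clusterfs ref="gfsdata"/>
    <ip address="192.168.1.100" monitor_link="1"/>
    <script name="myapp" file="/etc/init.d/myapp"/>
  </service>
</rm>
```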

Karl

--
Karl Podesta
 
