[Linux-cluster] Howto define two-node cluster in enterprise environment

Kit Gerrits kitgerrits at gmail.com
Wed Jan 12 09:43:15 UTC 2011


FYI:

http://magazine.redhat.com/2007/12/19/enhancing-cluster-quorum-with-qdisk/
QDisk uses a small 10MB disk partition shared across the cluster. Qdiskd
runs on each node in the cluster, periodically evaluating its own health and
then placing its state information into an assigned portion of the shared
disk area. Each qdiskd then looks at the state of the other nodes in the
cluster as posted in their area of the QDisk partition. When the cluster is
healthy, its quorum count is the sum of one vote per node plus the votes
assigned to the QDisk partition. In the example above, the total quorum
count is 7: one for each of the four nodes and 3 for the QDisk partition.

If, on a particular node, QDisk is unable to access its shared disk area
after several attempts, then the qdiskd running on another node in the
cluster will request that the troubled node be fenced. This will reset that
machine and get it back into an operational state.
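
To make the vote arithmetic concrete, a cluster.conf fragment along these
lines would produce the count described above. This is only a minimal
sketch: the cluster name, node names and qdisk label are made up, and it
assumes the quorum partition was already initialised with mkqdisk.

    <cluster name="example" config_version="1">
      <cman expected_votes="7"/>
      <clusternodes>
        <clusternode name="node1" nodeid="1" votes="1"/>
        <clusternode name="node2" nodeid="2" votes="1"/>
        <clusternode name="node3" nodeid="3" votes="1"/>
        <clusternode name="node4" nodeid="4" votes="1"/>
      </clusternodes>
      <!-- 4 node votes + 3 qdisk votes = expected_votes of 7 -->
      <quorumd interval="1" tko="10" votes="3" label="example_qdisk"/>
    </cluster>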


http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Cluster_Administration/s1-qdisk-considerations-CA.html
A quorum disk device should be a shared block device with concurrent
read/write access by all nodes in a cluster. The minimum size of the block
device is 10 Megabytes. Examples of shared block devices that can be used by
qdiskd are a multi-port SCSI RAID array, a Fibre Channel RAID SAN, or a
RAID-configured iSCSI target. You can create a quorum disk device with
mkqdisk, the Cluster Quorum Disk Utility. For information about using the
utility refer to the mkqdisk(8) man page. 
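
For example, initialising and labelling the quorum partition is a one-liner
(just a sketch: /dev/mapper/qdisk-lun and the label are placeholders for
your own device and name):

    # on one node: write the qdisk header onto the shared partition
    mkqdisk -c /dev/mapper/qdisk-lun -l example_qdisk

    # on any node: list the quorum disks this host can see, to verify
    mkqdisk -L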


A/
The quorum disk is a raw device, NOT a filesystem, which all servers use
together.
It is meant to be written to by multiple servers at the same time.
(really!)
There is no filesystem on it to kill.
Should the quorum disk get corrupted, I assume the qdiskd service will let
you know.

AFAIK, all nodes update a little chunk of the Qdisk to let the others know
they are still alive.
Upon a "split brain" cluster failure, the side that can still write to the
Qdisk (and passes its heuristics) gets the extra votes.
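
If you want that tie-break to be deterministic, you can attach heuristics to
the quorumd stanza, e.g. a ping to the default gateway: only the side that
still passes the heuristic keeps the qdisk votes. A rough sketch, assuming
192.168.1.1 is your gateway (tune interval, tko and score to your
environment):

    <quorumd interval="1" tko="10" votes="1" label="example_qdisk">
      <!-- 192.168.1.1 is an assumed gateway address -->
      <heuristic program="ping -c1 -w1 192.168.1.1" score="1" interval="2"/>
    </quorumd>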


B/
If you want to share a REAL filesystem LUN between cluster nodes, you can
set it up as a clustered LVM volume.
Clustering will then guarantee that only one server can mount it at a time.
Should clustering fail for some reason, the LVM volume will be deactivated.
Should the machine running the LVM volume crash, it will be fenced
(rebooted) and the LVM volume deactivated.
Once the machine has been successfully fenced, the LVM volume will be
re-activated, so other cluster members can mount the volumes there.
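
A rough outline of what that looks like with CLVM on RHEL 5 (the volume
group, volume and device names below are just examples; note that clvmd
itself needs a quorate cluster before it will start):

    # on every node: switch LVM to cluster-aware locking and start clvmd
    lvmconf --enable-cluster
    service clvmd start

    # on one node: create a clustered VG and a logical volume on the LUN
    vgcreate -cy clustervg /dev/mapper/shared-lun
    lvcreate -n data -L 50G clustervg

    # activate the volume exclusively, so only this node can use it
    lvchange -aey clustervg/data

In a real setup you would normally let the rgmanager resource agents drive
the activation and mounting as part of a service, rather than doing it by
hand.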


Regards,

Kit


-----Original Message-----
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Thomas Sjolshagen
Sent: maandag 10 januari 2011 21:38
To: linux clustering
Subject: Re: [Linux-cluster] Howto define two-node cluster in
enterprise environment

 On Mon, 10 Jan 2011 21:21:45 +0100, "Kit Gerrits" 
 <kitgerrits at gmail.com> wrote:
> Hello fellow administrator,
>
> If you have a SAN...
> Why can't you have the SAN publish the same LUN to the two cluster 
> nodes simultaneously?

 You can, but you minimally need to guarantee (not believe or think, but
 guarantee!) that both nodes do not

 a) write to the same sectors, file systems or LVM volumes at the same time
    (this is actually a whole lot more difficult to do than most people
    think) - including boot sectors, partition tables, LVM metadata, etc.,
    etc.,

 b) think they're exclusively accessing the LUN. I.e., there must be
    something on the nodes - an application, OS tool or something else -
    that understands that there is more than one reader & writer to a LUN
    and thus synchronizes this.

> It is only used as a raw device, so there should be no ugly filesystem 
> side-effects.

 File systems only serve to make this a lot more obvious to the end user or
 administrator, since their integrity tends to get shot fairly quickly and
 there are integrity checks in place. On raw devices, you get the "benefit"
 of ignorance about the fact that your data is corrupt, unless b) above is
 true.

 Hth,

 // Thomas

>
>
> Regards,
>
> Kit
>
> -----Original Message-----
> From: linux-cluster-bounces at redhat.com 
> [mailto:linux-cluster-bounces at redhat.com] On Behalf Of Andreas 
> Bleischwitz
> Sent: maandag 10 januari 2011 11:25
> To: linux-cluster at redhat.com
> Subject: [Linux-cluster] Howto define two-node cluster in 
> enterprise environment
>
> Hello list,
>
> I recently ran into some questions regarding a two-node cluster in an
> enterprise environment, where single points of failure were to be
> eliminated whenever possible.
>
> The situation is the following:
> Two-node cluster, SAN-based shared storage (multipathed, host-based
> mirrored), bonded NICs, quorum device as tie-breaker.
>
> Problem:
> The quorum device is a single point of failure: the SAN device could
> fail, and the quorum disc would then no longer be accessible.
> The quorum disc can't be host-based mirrored, as that would require
> cmirror - which depends on a quorate cluster.
> One solution: use storage-based mirroring - at extra cost, and with
> limited to no support for mixed storage vendors.
> Another solution: use a third, non-service node, which has to have the
> same SAN connections as the other two nodes purely for cluster reasons.
> This node would idle most of the time and therefore be very
> uneconomical.
>
> How are such situations usually solved using RHCS? There must be a way 
> of configuring a two-node cluster without having a SPOF defined.
>
> HP had a quorum-host with their no longer maintained Service Guard, 
> which could do quorum for more than one cluster at once.
>
> Any suggestions appreciated.
>
> Best regards,
>
> Andreas Bleischwitz
>

--
Linux-cluster mailing list
Linux-cluster at redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster



