[Linux-cluster] Redhat without qdisk

Digimer lists at alteeve.ca
Thu Apr 12 16:18:42 UTC 2012

On 04/12/2012 11:51 AM, Ryan O'Hara wrote:
> On 04/12/2012 10:18 AM, emmanuel segura wrote:
>> That's right. You'll find your cluster partitioned, and if you use
>> "<cman two_node="1" expected_votes="1">" as Red Hat suggests, your
>> cluster may suffer data corruption.
> How? What fence agent are you using? I've used this configuration for
> years and never had data corruption.

Same. I build almost exclusively two-node clusters without qdisk and
they are perfectly stable. The trick is to have your fencing set up well
and tested. Personally, I recommend IPMI/iLO/DRAC/RSA as the first fence
device with a switched PDU as a backup. The 'delay' option will help
ensure only one node dies in a split condition.
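For reference, a minimal two-node cluster.conf along those lines might
look like this. This is a sketch only: node names, IPs, logins and the
choice of fence_ipmilan/fence_apc are placeholder assumptions, not
details from this thread.

```xml
<?xml version="1.0"?>
<cluster name="example" config_version="1">
  <!-- Two-node mode: the cluster stays quorate with a single vote. -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1">
      <fence>
        <!-- Primary fence method: IPMI. -->
        <method name="ipmi">
          <device name="ipmi_n1"/>
        </method>
        <!-- Backup fence method: switched PDU. -->
        <method name="pdu">
          <device name="pdu1" port="1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2" nodeid="2">
      <fence>
        <method name="ipmi">
          <device name="ipmi_n2"/>
        </method>
        <method name="pdu">
          <device name="pdu1" port="2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <!-- 'delay' on node1's fence device means fencing node1 waits 15s,
         so in a fence race node1 wins and node2 is powered off. -->
    <fencedevice name="ipmi_n1" agent="fence_ipmilan"
                 ipaddr="10.0.0.1" login="admin" passwd="secret" delay="15"/>
    <fencedevice name="ipmi_n2" agent="fence_ipmilan"
                 ipaddr="10.0.0.2" login="admin" passwd="secret"/>
    <fencedevice name="pdu1" agent="fence_apc"
                 ipaddr="10.0.0.3" login="admin" passwd="secret"/>
  </fencedevices>
</cluster>
```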

>> Because every node can operate with one vote and reach quorum on its
>> own. To work around the fencing race, Red Hat implemented a fence
>> delay for some fence agents as a permanent solution.
>> For fence_scsi in Red Hat 5.X, Red Hat support says it's OK for
>> production. Bah, not true!
> What are your concerns about fence_scsi?
>> And the Red Hat technician says the cluster doesn't require a quorum
>> disk.
> Can you explain why qdisk would be required?
> Ryan

It's true that a qdisk helps a bit, and it is true that without it
quorum is effectively disabled. However, services will never start on
both nodes so long as you've configured the cluster to never make
assumptions about the other node and have proper fencing in place.

The reason is that, when a partition occurs, both nodes will trigger
fenced. In turn, fenced informs DLM which stops providing locks. This
effectively blocks the cluster as gfs2, clvmd and rgmanager all use
locks. Once one of the fence methods succeeds, DLM is again informed and
it resumes providing locks. Of course, at this point, the other node is
dead, so the remaining node can be confident it is the only one
providing services.
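That sequence can be sketched as a toy simulation. This is pure
illustration, not real cluster code: the Node class, node names and the
partition() function are all made up for this sketch, and 'fence_delay'
here simply models how long each node waits before fencing its peer.

```python
# Toy model of a two-node fence race after a partition (illustrative only).

class Node:
    def __init__(self, name, fence_delay=0):
        self.name = name
        self.fence_delay = fence_delay  # seconds this node waits before fencing its peer
        self.alive = True
        self.dlm_blocked = False


def partition(node_a, node_b):
    """On a partition, both nodes trigger fenced; DLM blocks all locks."""
    for n in (node_a, node_b):
        n.dlm_blocked = True  # gfs2, clvmd and rgmanager all stall here
    # Fence race: the node with the shorter delay fires first and wins.
    winner, loser = sorted((node_a, node_b), key=lambda n: n.fence_delay)
    loser.alive = False         # loser is powered off via IPMI/PDU
    winner.dlm_blocked = False  # fenced informs DLM; locks resume
    return winner


a = Node("node1", fence_delay=0)   # preferred survivor, fences immediately
b = Node("node2", fence_delay=15)  # waits 15s before fencing
survivor = partition(a, b)
print(survivor.name)  # prints "node1": node2 is fenced before it can act
```

The point of the model is that neither node runs services while DLM is
blocked, so there is no window in which both sides touch shared storage.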

Hope that helps expand on what Ryan and the others are saying.

Papers and Projects: https://alteeve.com
