[Linux-cluster] GFS+DRBD+Quorum: Help wrap my brain around this

Colin Simpson Colin.Simpson at iongeo.com
Tue Nov 23 12:28:41 UTC 2010


On Tue, 2010-11-23 at 11:09 +0000, Kaloyan Kovachev wrote:
> Hi,
>  just my 0.02 below
> 
> On Mon, 22 Nov 2010 21:21:50 +0000 (UTC), "A. Gideon"
> <ag8817282 at gideon.org> wrote:

> 
> As an additional step you may set fencing (in drbd.conf) to
> resource-and-stonith and edit your outdate-peer DRBD script to issue
> fence_node and return exit status 7 as a last-resort action (if the
> other node can't be reached) - this will also protect you from the
> case when just the communication between the DRBD machines is lost
> 
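
To illustrate, the above would look roughly like this in drbd.conf - a
sketch only, assuming DRBD 8.3 syntax (where the handler is fence-peer
rather than outdate-peer); the script path and peer name are made up:

  disk {
   fencing resource-and-stonith;
  }
  handlers {
   fence-peer "/usr/local/sbin/drbd-fence-peer.sh";
  }

and the wrapper script would be something like:

  #!/bin/sh
  # Hypothetical last-resort handler: fence the peer via cluster suite,
  # then tell DRBD the peer is gone by exiting 7.
  PEER=othernode               # made-up peer node name
  if fence_node "$PEER" ; then
      exit 7                   # DRBD: peer has been stonithed
  fi
  exit 5                       # DRBD: peer could not be reached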
> >
> >> For me a couple of minutes waiting for the other node is
> >> sufficient if it was degraded already, maybe a bit longer if the
> >> DRBD was sync'd before they went down.
> >
> > I'm afraid I'm not clear what you mean by this.  Isn't the fact
> > that each node cannot know the state of the other the problem?  So
> > how can wait times be varied as you describe?
> >
> >
> >> I can send you configs I believe are correct from the Linbit docs
> >> of using DRBD Primary/Primary with GFS, if you like.
> >
> > Something more than
> > http://www.drbd.org/users-guide/s-gfs-create-resource.html ?  That
> > would be welcome.
> >
> >>
> >> But I'm told (from a thread I posted at DRBD) that this should
> >> always work.
> >
> > This is something I'm realizing: that I need to ask some of my
> > questions on that list rather than here, since my questions right
> > now are more down at that layer.
> >
> > Thanks...

I don't see why you need fencing at all in DRBD; you can (I believe)
do all the fencing just in cluster suite. Setting this in drbd.conf:


  net {
   allow-two-primaries;                  # required for dual-primary, so GFS can mount on both nodes
   after-sb-0pri discard-zero-changes;   # neither was primary: if only one side changed, use its data
   after-sb-1pri discard-secondary;      # one was primary: discard the secondary's changes
   after-sb-2pri disconnect;             # both were primary: don't auto-resolve, just disconnect
  }

This should always use the newest data set for all access on an
out-of-sync DRBD, and resync from the newest data.

So I'm led to believe; I'm still testing, but it seems to do the
correct thing in my test set-up. It also seemed to be the answer I got
on the DRBD mailing list, in a thread called "Best Practice with DRBD
RHCS and GFS2".

Just tell cluster suite that a single node can be quorate:
	<cman expected_votes="1" two_node="1"/>
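
A minimal two-node cluster.conf around that would be something like
this - a sketch; the cluster and node names are made up, and real
fence devices still need to be defined:

	<?xml version="1.0"?>
	<cluster name="gfscluster" config_version="1">
		<cman expected_votes="1" two_node="1"/>
		<clusternodes>
			<clusternode name="node1" nodeid="1"/>
			<clusternode name="node2" nodeid="2"/>
		</clusternodes>
		<!-- fencedevices and per-node fence blocks go here -->
	</cluster>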

Since the third node is ignorant of the status of DRBD, I don't really
see what help it gives in deciding quorum.

Colin
