[Linux-cluster] GFS+DRBD+Quorum: Help wrap my brain around this
Andrew Gideon
ag8817282 at gideon.org
Fri Nov 19 20:22:20 UTC 2010
On Fri, 2010-11-19 at 12:48 -0500, Andrew Gideon wrote:
> On Wed, 2010-11-17 at 21:02 +0000, Colin Simpson wrote:
> > On a two-node, non-shared-storage setup you can never fully guard
> > against the scenario of node A being shut down, and node B then being
> > shut down later. Node A is then brought up with no way of knowing
> > that it has older data than B, if B is still down.
>
> I was under the impression that this was solved by adding a quorum
> disk. Is that not correct?
>
> [...]
>
> > Three nodes just adds needless complexity from what you are saying.
>
> I thought that a third node could act as a "quorum server". If A
> can still reach that third node (C), then A and C have quorum. The
> same is true with B in place of A. If A and B retain contact with
> each other, but lose touch with C, quorum still exists.
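For concreteness, here is roughly what I meant in cluster.conf terms.
This is only a sketch (the node names are made up, and I've omitted
the fencing sections): three one-vote nodes with expected_votes="3",
so any two nodes that can reach each other keep quorum:

    <?xml version="1.0"?>
    <cluster name="example" config_version="1">
      <cman expected_votes="3"/>
      <clusternodes>
        <clusternode name="node-a" nodeid="1" votes="1"/>
        <clusternode name="node-b" nodeid="2" votes="1"/>
        <clusternode name="node-c" nodeid="3" votes="1"/>
      </clusternodes>
    </cluster>

Node C wouldn't need to run GFS or DRBD at all; it would contribute
nothing but its vote.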
I've given this a little more thought. I'm not sure if I'm thinking in
the proper direction, though.
If cluster quorum is preserved despite A and B being partitioned from
each other, then one of A or B will be fenced (by either cluster
fencing or DRBD fencing). This would be true whether quorum is
maintained with a third node or with a quorum disk.
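The quorum-disk variant would look something like this instead (again
only a sketch; the label, interval, and tko values are placeholders):
two one-vote nodes plus a one-vote qdisk, so a surviving node together
with the quorum disk still makes quorum:

    <cman expected_votes="3"/>
    <clusternodes>
      <clusternode name="node-a" nodeid="1" votes="1"/>
      <clusternode name="node-b" nodeid="2" votes="1"/>
    </clusternodes>
    <quorumd interval="1" tko="10" votes="1" label="myqdisk"/>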
Further, to avoid the problem described a couple of messages back (A
fails, B fails, A returns w/o knowing that B has later data), the fact
that B continued w/o A needs to be stored somewhere. This can be done
either on a quorum disk or via a third node. Either way, the fencing
logic would make a note of this. For example, if A were fenced, then
that bit of extra storage (quorum disk or third node) would record that
B had continued w/o A and that B therefore had the latest copy of the
data.
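I picture that note-taking as a small wrapper around DRBD's fence-peer
handler. The following is entirely hypothetical (the flag file, the
ssh approach, and the host name node-c are mine; obliterate-peer.sh
is, I believe, the LINBIT-supplied handler for RHCS setups):

    #!/bin/sh
    # Hypothetical fence-peer wrapper: record on node C (the third
    # node) which node kept running, then hand off to the real
    # handler. The flag-file path and "node-c" are illustrative.
    ssh node-c "echo $(uname -n) > /var/lib/cluster/latest-data"
    exec /usr/lib/drbd/obliterate-peer.sh "$@"

This would be wired in via the fence-peer handler in drbd.conf, with
"fencing resource-and-stonith;" in the resource's disk section so that
DRBD actually invokes it.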
When A or B returns to service, it would need to check that additional
storage. If a node determines that its peer has the later data, it can
invoke "drbdadm outdate" on itself.
Doesn't this seem reasonable? Or am I misthinking it somehow?
- Andrew