[Linux-cluster] Cluster with shared storage on low budget

Fajar A. Nugraha work at fajar.net
Tue Feb 15 10:08:45 UTC 2011

On Tue, Feb 15, 2011 at 4:57 PM, Gordan Bobic <gordan at bobich.net> wrote:
> Nikola Savic wrote:
>>  If I understand you correctly, even before the sync is completely done,
>> DRBD will take care of reads and writes of dirty blocks on the problematic
>> node that came back online? Let's say the node was down for a longer time
>> and synchronization takes a few minutes, maybe more. If all services start
>> working before the sync is complete, a web application may try to write
>> into or read from dirty block(s). Will DRBD take care of that? If not, is
>> there a way to suspend startup of services (web server and similar) until
>> the sync is done?
> DRBD and GFS will take care of that for you. DRBD directs reads to nodes
> that are up to date until everything is in sync.

Really? Can you point to documentation that says so?
IIRC the block device /dev/drbd* on a node will not be accessible for
read/write until it's synced.

> Make sure that in drbd.conf you put in a stonith parameter pointing at your
> fencing agent with suitable parameters, and set the timeout to slightly less
> than what you have set in cluster.conf. That will ensure that you are
> protected from the race condition where DRBD drops out but the node starts
> heartbeating again before the fencing timeout occurs.
> Oh, and if you are going to use DRBD there is no reason to use LVM.
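
As a rough sketch of that kind of fencing setup (DRBD 8.x syntax assumed; the
policy and handler path are illustrative examples, not a tested config):

```
# drbd.conf fragment (illustrative)
resource r0 {
  disk {
    # freeze I/O and fence the peer when replication is lost
    fencing resource-and-stonith;
  }
  handlers {
    # wrapper that invokes the cluster's fencing agent; path is an example
    fence-peer "/usr/lib/drbd/obliterate-peer.sh";
  }
}
```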

There are two ways to use DRBD with LVM in a cluster:
(1) Use DRBD on a partition/disk, and run CLVM on top of it
(2) Create local LVs, and use DRBD on top of the LVs

Personally I prefer (2), since this setup allows LVM snapshots and is
faster to resync if I want to reinitialize a DRBD device on one of the
nodes (e.g. after a split brain, which happened often on my fenceless
test setup a while back).
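
For what it's worth, setup (2) can be sketched roughly as follows (DRBD
8.3-style syntax; hostnames, device names, and sizes are made-up examples).
The local LV is created on each node first, then the DRBD resource is
backed by it:

```
# On each node: plain local (non-clustered) LVM, names are examples
#   pvcreate /dev/sda3
#   vgcreate vg0 /dev/sda3
#   lvcreate -L 50G -n lv_web vg0

# drbd.conf fragment: DRBD sits on top of the local LV
resource web {
  on node1 {
    device    /dev/drbd0;
    disk      /dev/vg0/lv_web;   # local LV backing this node's copy
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on node2 {
    device    /dev/drbd0;
    disk      /dev/vg0/lv_web;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

Since the LVs are independent per node, a resync after reinitializing one
side only has to cover that LV, and snapshots can be taken locally.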

