[Linux-cluster] Cluster with shared storage on low budget

Gordan Bobic gordan at bobich.net
Tue Feb 15 21:01:21 UTC 2011


On 02/15/2011 06:19 PM, yvette hirth wrote:
> Thomas Sjolshagen wrote:
>
>> Just so you realize: if you intend to use clvm (i.e. LVM in a cluster
>> where you expect to be able to write to the volume from more than one
>> node at/around the same time w/o a full-on failover), you will _not_
>> have snapshot support. And no, this isn't "not supported" as in
>> "nobody to call if you encounter a problem", it's "not supported" as
>> in "the tools will not let you create a snapshot of the LV".
>
> i've been listening to this discussion with much interest, as we would
> like to improve the currency of our backup files.
>
> right now we have an ensemble of GFS2 LV's ("pri") as our primary data
> store, and a "matching ensemble" of XFS LV's ("bak") as our backup data
> store. an hourly cron job rsync's all LV's in the ensemble from pri =>
> bak. it's incredibly reliable, but it means the bak copies lag pri by a
> mean of 1/2 hour. one upside is that i've got snapshots that are only
> 1/2 hour old, and are daily backed up to tape.
>
> the conversation seems to indicate that we can change the bak LV's from
> XFS to GFS2 and have drbd auto-sync changes made on the pri LV's over
> to the bak LV's - yes? this would cut our backup lag from a mean of 1/2
> hour to, theoretically, "atomic" (more likely "mere seconds"). i assUme
> we have to change from XFS to GFS2, as drbd doesn't appear to do file
> system conversions...
>
> if our assumptions are correct, are there any guides / manuals / doc on
> how to do this? it's most tempting to try, since if it doesn't work, the
> hourly cron rsync's could be simply reinstated.

I'm not sure you realize what this would require. DRBD is a block 
device. You would have to start with a new partition/disk, "format" it 
for DRBD (which writes the DRBD metadata onto the block device), then 
create GFS on top of it and put your files in. Migrating to and from it 
is a backup+restore job.
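
Purely as an illustration of the moving parts - the resource name, 
devices, node names and addresses below are all made up, so check the 
DRBD and GFS2 docs for the versions you actually run - the setup would 
look roughly like this:

  # /etc/drbd.d/r0.res - minimal two-node resource (hypothetical names)
  resource r0 {
    on pri-node {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   192.168.1.1:7789;
      meta-disk internal;
    }
    on bak-node {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   192.168.1.2:7789;
      meta-disk internal;
    }
  }

  # on both nodes: write the DRBD metadata and bring the resource up
  drbdadm create-md r0
  drbdadm up r0

  # on one node only: force the initial sync, then put GFS2 on top
  # (older DRBD spells the first command
  #  "drbdadm -- --overwrite-data-of-peer primary r0")
  drbdadm primary --force r0
  # -t must be <your cluster name>:<fs name>; -j gives one journal per
  # node that will ever mount the filesystem
  mkfs.gfs2 -p lock_dlm -t yourcluster:pri0 -j 2 /dev/drbd0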

If you were to do this, your backup node would have to be part of your 
DRBD cluster (all nodes need access to the DRBD device, unless you plan 
to run DRBD only on the SAN that all the nodes connect to the volume 
from). You would then drop the backup node out of the cluster completely 
and make sure it cannot reconnect (this is vitally important), mount the 
GFS filesystem from the DRBD device read-only with lock_nolock, and back 
that up. That is, unless you are happy with just a block-level mirror, 
which won't help you if data is accidentally deleted. DRBD is network 
RAID1, and RAID (of any level) is not a replacement for backups - but 
I'm sure you know that.
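
To make that concrete - and again only as a rough sketch using the same 
hypothetical resource name plus a made-up mount point, not something to 
run blindly - the backup pass on the dropped-out node would go along 
these lines:

  # cut the replication link so the local copy is frozen and the peer
  # cannot reconnect behind your back
  drbdadm disconnect r0

  # DRBD refuses all I/O on a Secondary, so promote the now-standalone
  # node (its data must be UpToDate)
  drbdadm primary r0

  # mount read-only, bypassing the cluster lock manager since this node
  # is no longer part of the cluster
  mount -t gfs2 -o ro,lockproto=lock_nolock /dev/drbd0 /mnt/bak

  # ... run the tape backup / rsync against /mnt/bak here ...

  umount /mnt/bak
  drbdadm secondary r0
  # reconnect; if DRBD reports split brain after the Primary stint, you
  # may have to discard this node's (unchanged) copy, e.g. with
  # "drbdadm -- --discard-my-data connect r0"
  drbdadm connect r0

The vital part, as above, is making absolutely sure the node stays out 
of the cluster and away from its DRBD peer for as long as the 
filesystem is mounted with lock_nolock.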

Gordan



