[Linux-cluster] Cluster with shared storage on low budget
gordan at bobich.net
Tue Feb 15 12:04:44 UTC 2011
Nikola Savic wrote:
> Gordan Bobic wrote:
>> DRBD and GFS will take care of that for you. DRBD directs reads to
>> nodes that are up to date until everything is in sync.
>> Make sure that in drbd.conf you put in a stonith parameter pointing at
>> your fencing agent with suitable parameters, and set the timeout to
>> slightly less than what you have it set in cluster.conf. That will
>> ensure that you are protected from the race condition where DRBD might
>> drop out but the node starts heartbeating between then and when the
>> fencing timeout occurs.
>> Oh, and if you are going to use DRBD there is no reason to use LVM.
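To make that concrete, a sketch of the relevant drbd.conf pieces (DRBD 8.3-era syntax; the resource name and agent path are illustrative, and the timeout needs tuning against your cluster.conf values):

```
resource r0 {
  disk {
    # Freeze I/O and invoke the fence handler when the peer disappears
    fencing resource-and-stonith;
  }
  handlers {
    # Illustrative path - point this at your own fencing agent/wrapper
    outdate-peer "/usr/local/sbin/drbd-fence-wrapper.sh";
  }
  net {
    # In tenths of a second; keep this slightly below the fencing
    # timeout configured in cluster.conf
    timeout 50;
  }
}
```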
> This is an interesting approach. I understand that DRBD with GFS2 doesn't
> require LVM in between, but it does bring some inflexibility:
> * For each logical volume, one has to set up a separate DRBD device
Can you elaborate on what you are referring to? Partitions? There is
technically nothing stopping you from partitioning a DRBD device. Also,
depending on what you are doing, you may find that having one DRBD
device per disk is preferable in terms of performance and reliability to
having a mirrored pool (effectively RAID01). A pool of mirrors (RAID10)
is generally the more resilient arrangement.
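The reliability gap between the two layouts is easy to verify by enumerating double failures; a minimal sketch with four disks (the pair assignments are illustrative):

```python
from itertools import combinations

# 4 disks: RAID10 = two mirrored pairs, striped; RAID0+1 = two striped
# pairs, mirrored.  Enumerate every pair of simultaneous disk failures
# and count how many the array survives.
disks = range(4)
pairs = list(combinations(disks, 2))    # 6 possible double failures

# RAID10: mirrors are (0,1) and (2,3); data is lost only if both
# failures land in the same mirror pair.
raid10_survived = sum(1 for f in pairs if set(f) not in ({0, 1}, {2, 3}))

# RAID0+1: stripes are (0,1) and (2,3); one failure kills its whole
# stripe, so a second failure in the other stripe loses the array.
raid01_survived = sum(1 for f in pairs if set(f) in ({0, 1}, {2, 3}))

print(raid10_survived, raid01_survived)   # prints: 4 2
```

With two simultaneous failures out of four disks, RAID10 survives 4 of the 6 cases while RAID0+1 survives only 2, which is why mirroring first and striping second is the safer order.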
> * Cluster-wide logical volume resizing is not easy
Are you really going to spoon-feed the space expansions that much, thus
causing unnecessary fragmentation? If you size your storage sensibly,
you won't need to upgrade it for a few years, and when the time comes to
upgrade it you may well need to replace the servers while you're at it.
Volume resizing is, IMO, overrated and unnecessary in most cases,
except where data growth is quite mind-boggling (in which case you won't
be using MySQL anyway).
> * No snapshot - this is very important to me for MySQL backups.
Last I checked CLVM couldn't do snapshots, but that may have changed
recently. Snapshots also aren't even remotely ideal for MySQL backups.
You really need a replicated server to take reliable backups from.
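For what it's worth, the slave-based backup boils down to something like this (credentials and paths are illustrative, and in practice you'd wrap it with error handling):

```
# On the replication slave:
mysql -e "STOP SLAVE SQL_THREAD;"     # freeze the data at a consistent point
mysqldump --all-databases --single-transaction > /backup/all-dbs.sql
mysql -e "START SLAVE SQL_THREAD;"    # resume catching up with the master
```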
> What is the main reason you chose not to use LVM on top of DRBD? Is it
> just that you didn't need the benefits it brings? Or does it cause more
> problems, in your opinion?
Traditionally, CLVM didn't provide any tangible benefits (no snapshots),
and I never found myself in a situation where dynamically growing a
volume with randomly assembled storage was required. If you are JBOD-ing
a bunch of cheap SATA disks, you might as well size the storage
correctly to begin with and not have to bother with LVM. I'm assuming
that is your setup, since you are doing this on the cheap (SAN-less).
If you are using a SAN, the SAN will provide functionality
to grow the exported block device and you can just grow the fs onto
that, without needing LVM.
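Growing onto a resized LUN is typically just (device and mount point are illustrative; gfs2_grow operates on the mounted filesystem):

```
echo 1 > /sys/block/sdb/device/rescan   # let the kernel pick up the new LUN size
gfs2_grow /mnt/shared                   # extend GFS2 to fill the device
```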
So apart from (non-clustered) snapshots, or a setup like the one
suggested earlier, with DRBD on top of local LVM to gain
locally-consistent snapshot capability in a cluster (not sure I'd trust
that with my data, but it may be good for non-production environments),
I don't really see the advantage. Snapshots also only give you
crash-level consistency, which I never felt was good enough for
applications like databases. A replicated slave that you can shut down
is generally a more reliable solution for backups.