[Linux-cluster] RHEL 6 cluster filesystem resource and LVM snapshots

Jonathan Barber jonathan.barber at gmail.com
Wed Nov 24 14:27:53 UTC 2010


On 24 November 2010 09:48, Xavier Montagutelli
<xavier.montagutelli at unilim.fr> wrote:
> On Wednesday 24 November 2010 09:34:48 Jankowski, Chris wrote:
>> Xavier,
>>
>> Thank you for the explanation.
>> This all makes sense.
>>
>> One more question about one of the documents you pointed me to:
>>
>> What does this do exactly and why do I need it:
>>
>> Quote:
>>
>> 4) Update your initrd on all your cluster machines. Example:
>> prompt> new-kernel-pkg --mkinitrd \
>>         --initrdfile=/boot/initrd-halvm-`uname -r`.img --install `uname -r`
>>
>> Unquote
>
> Caution: the following is only supposition, because I haven't read the
> lvm.sh script.
>
> In step 3 of http://sources.redhat.com/cluster/wiki/LVMFailover , you have to
> modify the "volume_list" parameter in your lvm.conf file to filter which VGs/LVs
> can be activated on a particular host.
>
> In step 4, they say to create a new initrd: I suppose this step is necessary
> to include the modified lvm.conf file inside the initrd, so that the VG
> located on the shared storage is NOT activated at boot time.
>
> The lvm.sh script must add or remove the "good" tag (i.e. a tag matching the
> hostname of the node running the service) on the fly.
>
> Can someone confirm or give additional pointers?

That's how I understand it.
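
For reference, the volume_list line in /etc/lvm/lvm.conf ends up looking
something like this (the VG name and hostname here are only examples;
"@<hostname>" matches VGs/LVs tagged with that node's name):

  volume_list = [ "vg_root", "@node1.example.com" ]

With that in place only the local root VG and anything tagged for this
node gets activated at boot, which is why the initrd has to be rebuilt
to pick up the new lvm.conf.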

I've used LVM on RHEL5 *without* clvmd and not had any problems with
corruption, etc., but I haven't used snapshots. You have to try
really, really hard to break the tagged LVM config from the command
line, as the tags prevent activation (which also prevents you from
accidentally mounting the FS on different nodes). It's worth knowing
about the "--config" argument to the LVM commands, and how to activate
the LVs from the command line so you can do maintenance on the tagged
VG/LVs outside of RHCS:
$ lvchange --config "activation { volume_list = [ '@$HOSTNAME' ] }" -a y vg00/test
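and deactivate it again when you're done, before handing the service
back to rgmanager (I don't think you need the --config override here,
as volume_list only gates activation, but check):
$ lvchange -a n vg00/test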

I am not an LVM hacker, so take the following comments with the
appropriate caution:
If you only ever make changes to the LVM metadata on the active node, I
think you'd have to be really unlucky to suffer corruption due to stale
LVM metadata. Although if you're carrying out long-running tasks like
relocating PEs, it might be worth freezing the service (even that is
probably overly cautious). I think if you start making changes on
multiple nodes at the same time, you will suffer badly (but the tags
should stop that from happening accidentally).
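
If you do freeze it, from memory it's something like this with
rgmanager's clusvcadm (service name made up):
$ clusvcadm -Z myservice   # freeze: rgmanager stops managing/checking it
  ... do the PE moves / maintenance ...
$ clusvcadm -U myservice   # unfreeze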

If you are activating LV resources on the basis of their VG, LVM
snapshots should survive the resource being relocated between nodes;
when the VG is deactivated on the original node, both the original and
snapshot LVs will be deactivated at the same time, so you won't miss
any writes in the snapshot.
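In other words, when the service moves, the agent does something
equivalent to (reusing the vg00 example from above):
$ vgchange -a n vg00    # old node: origin *and* snapshot LVs go down
$ vgchange -a y vg00    # new node: both come back together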

In my RHEL6 test environment, I just created a two-node cluster with
cman/clvmd and could create a snapshot of an LV in a shared VG. This
fails under RHEL5. I'm not sure I'd trust it to actually work
though...
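
(The snapshot was just created the usual way, along the lines of
$ lvcreate -s -L 1G -n data_snap sharedvg/data
with made-up names and sizes; I seem to remember snapshots in a
clustered VG also need the origin LV activated exclusively on one node,
which is worth checking.)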

It's probably worth directing some of your questions at the LVM list
for a more definitive answer:
https://www.redhat.com/mailman/listinfo/linux-lvm

PS: I'm sure you will, but you should test it :)

Cheers

>>
>> Regards,
>>
>> Chris Jankowski
> --
> Xavier Montagutelli                      Tel : +33 (0)5 55 45 77 20
> Service Commun Informatique              Fax : +33 (0)5 55 45 75 95
> Universite de Limoges
> 123, avenue Albert Thomas
> 87060 Limoges cedex

-- 
Jonathan Barber <jonathan.barber at gmail.com>



