[Linux-cluster] CLVM clarification

Angelo Compagnucci angelo.compagnucci at gmail.com
Thu Oct 2 15:15:41 UTC 2008


Sorry, but the role of CLVM is still not clear to me.
CLVM shares VG metadata across a cluster and makes cluster-wide
administration possible (according to the Red Hat documentation).

This means a CLVM cluster must have a CMAN cluster up and running.
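(If I understand the docs correctly, this means every node needs something
roughly like the following; locking_type = 3 is what I gathered from the LVM
guide, so please correct me if my assumption is wrong:)

# /etc/lvm/lvm.conf on every node -- switch LVM to clustered locking
locking_type = 3
# with the cman cluster already up, start the clvmd service
service clvmd start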

So, if I already have shared storage, the only thing I can do is make a GFS
filesystem on it and export that to the client machines. That way the shared
storage can be accessed by multiple machines.

In this scenario CLVM is not useful, because the shared locking on the
filesystem is guaranteed by GFS.
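(Just to make sure we are talking about the same thing, by "make a GFS
filesystem" I mean roughly this; the cluster name, filesystem name, journal
count and mount point are only examples of mine:)

# create a GFS filesystem with DLM locking on the shared device
gfs_mkfs -p lock_dlm -t mycluster:gfs01 -j 3 /dev/myvg/mylv
# mount it on every node that needs access
mount -t gfs /dev/myvg/mylv /mnt/shared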

Let's suppose I have several machines that I want to join into a cluster.
Each machine has local storage that I want to share with the other machines
to create one large storage pool.

With CLVM, according to the Red Hat guide, I can create a cluster
"presenting each cluster computer with the same view of the logical
volumes". [1]

So I have:

node 1:
  VG1  (local)
  VG2  (node2)
  VG3  (node3)

node 2:
  VG1  (node1)
  VG2  (local)
  VG3  (node3)

node 3:
  VG1  (node1)
  VG2  (node2)
  VG3  (local)

This should be what the Red Hat CLVM guide means by "the same view of the
logical volumes".

From this point of view, node1 acts as the shared storage. In this example
it is visible from all the cluster's nodes.

So if I run an "lvcreate" on one node, I should see the newly created LV on
the other nodes of the cluster.
Is that true?
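(Something like this is what I expect; the size and LV name are only
examples, and I am assuming VG1 is a clustered VG:)

# on node1
lvcreate -L 10G -n lvtest VG1
# on node2 and node3, without doing anything else
lvs VG1        # lvtest should already be listed here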

If this is true, GNBD is not necessary and the layout becomes really simple.
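(For clarity, the GNBD layer I am hoping to avoid would look roughly like
this, going by the Global_Network_Block_Device guide; the export name is
made up:)

# on each storage node, export the local partition
gnbd_serv
gnbd_export -d /dev/hda1 -e disk_node1
# on each client node, import the devices exported by that server
gnbd_import -i node1
# the device then shows up as /dev/gnbd/disk_node1 and can be used as a PV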

Thanks for your time!

[1]
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Logical_Volume_Manager/LVM_Cluster_Overview.html

2008/10/2 Xavier Montagutelli <xavier.montagutelli at unilim.fr>

> On Thursday 02 October 2008 12:28, Angelo Compagnucci wrote:
> > Ok, this could be clear, but in the Cluster_Logical_Volume_Manager.pdf
> > I've read (bottom of page 3):
> > "The clvmd daemon is the key clustering extension to LVM. The clvmd
> > daemon runs in each cluster computer and distributes LVM metadata
> > updates in a cluster, presenting each cluster computer with the same
> > view of the logical volumes"
> >
> > This is a picture of what I have in mind:
>
> This picture doesn't show the difference between a GNBD server (which
> doesn't know anything about the use of the exported block device: it
> doesn't know the VG, for example) and the GNBD clients (which actually use
> the block device as a PV). May I add some layers? Not exactly what I have
> in mind, but I am not an ASCII art expert:
>
>  ---------------------------
> |      GFS filesystem       |
>  ---------------------------
> |            LV             |
>  ---------------------------
> |            VG             |
>  ---------------------------
> |  PV1    |  PV2   |   PV3  |
> .---------.--------.--------.
> |             CLVM          |
> .---------.--------.--------.
> |  cluster basis (dlm,...)  |
> .---------.--------.--------.
> | Node4   | Node5  | Node6  |
> .---------.--------.--------.
> (Node4,5,6 have access to the three GNBD devices)
>        \    | |    /
>         \___|_|___/
>         /   | |   \
>        /    | |    \
>       /     | |     \
> .---------.--------.---------.
> | GNBD1   | GNBD2  | GNBD3   |
> .---------.--------.---------.
> | hda1    |  hda1  |   hda1  |
> | Node1   | Node2  |   Node3 |
> .---------.--------.---------.
>
> >
> > In this case the CLVM features are not useful, because there is only one
> > machine (which might not even be a node of the cluster) that has the LVM
> > over the GNBD-exported devices. So the nodes don't know anything about
> > the other nodes.
>
> That is true if your GNBD devices are accessed by only one other node. But
> if the GNBD devices are served to multiple nodes (node4, 5, 6), then CLVM
> is useful.
>
> >
> > Let's suppose this situation:
> >
> > ---------------------------------------
> > |                 GFS                 |
> > ---------------------------------------
> > |                 LV                  |
> > ---------------------------------------
> > |       VG1        |       VG2        |
> > ---------------------------------------
> > |       PV1        |       PV2        |
> > |      Node1       |      Node2       |
> > ---------------------------------------
> > |          CLVM coordinates           |
> > ---------------------------------------
> >
> > In this situation it makes sense to have a clustered LVM, because if I
> > have to do some maintenance on the VGs, CLVM can lock and unlock the
> > device concerned.
> >
> > Is this the correct behaviour?
>
> Perhaps I am missing your point, but it doesn't make sense if the block
> devices are local to each node. How could Node2 have access to the block
> device on Node1 (shown as PV1)?
>
> CLVM is useful only when you have shared storage.
>
> > If not, what is the CLVM role in a cluster?
>
> From what I know, CLVM protects the metadata part of LVM on the shared
> storage. And when you make an operation on the shared storage on one node
> (for example, creating a new LV), all the nodes are aware of the change.
>
>
> >
> >
> > 2008/10/2 Xavier Montagutelli <xavier.montagutelli at unilim.fr>
> >
> > > On Wednesday 01 October 2008 17:39, Angelo Compagnucci wrote:
> > > > Hi to all, this is my first post on this list. Thanks in advance for
> > > > every answer.
> > > >
> > > > I've already read every guide on this matter; this is the list:
> > > >
> > > > Cluster_Administration.pdf
> > > > Cluster_Logical_Volume_Manager.pdf
> > > > Global_Network_Block_Device.pdf
> > > > Cluster_Suite_Overview.pdf
> > > > Global_File_System.pdf
> > > > CLVM.pdf
> > > > RedHatClusterAdminOverview.pdf
> > > >
> > > > The truth is that one point about CLVM is not clear to me.
> > > >
> > > > Let me make an example:
> > > >
> > > > In this example, CLVM and the Cluster Suite are fully running without
> > > > problems. Let's suppose cluster.conf and lvm.conf are the same on all
> > > > nodes, and the nodes of the cluster are joined and operational.
> > >
> > > Does your example include a shared storage (GNBD, iSCSI, SAN, ...) ?
> > >
> > > > NODE1:
> > > >
> > > > pvcreate /dev/hda3
> > > >
> > > > NODE2:
> > > >
> > > > pvcreate /dev/hda2
> > > >
> > > > Let's suppose that CLVM spans LVM metadata across the cluster. If I
> > > > run the command:
> > > >
> > > > pvscan
> > > >
> > > > I should see /dev/hda2 and /dev/hda3
> > > >
> > > > and then I can create a VG with
> > > >
> > > > vgcreate <vg_name> /dev/hda2 /dev/hda3 ...
> > > >
> > > > The question is: how does LVM metadata sharing work? Do I have to use
> > > > GNBD on the raw partition to share a device between nodes? Can I
> > > > create a GFS over a spanned volume group? Are only logical volumes
> > > > shareable?
> > >
> > > I have the feeling that something is not clear here. I am not an
> > > expert, but:
> > >
> > > GNBD is just a means to export a block device over the IP network. A
> > > GNBD device is accessible to multiple nodes at the same time, and thus
> > > you can include that block device in a CLVM Volume Group. Instead of
> > > GNBD, you can also use any other shared storage (iSCSI, FC, ...). Be
> > > careful: from what I have understood, some storage technologies are
> > > not shareable between many hosts (NBD, AoE for example)!
> > >
> > > After that, you have the choice:
> > >
> > >  - to make one LV with a shared filesystem (GFS). You can then mount
> > > the same filesystem on many nodes at the same time.
> > >
> > >  - to make many LVs with an ext3 / xfs / ... filesystem. But you then
> > > have to make sure that each LV is mounted on only one node at a given
> > > time.
> > >
> > > But the type of filesystem is independent; it is a higher-level
> > > component.
> > >
> > > In this picture, CLVM is only a low-level component, preventing
> > > concurrent access by many nodes to the LVM metadata written on the
> > > shared storage.
> > >
> > > The data are not "spanned" across the local storage of many nodes
> > > (well, I suppose you *could* do that, but you would need other tools /
> > > layers?).
> > >
> > > Another point: if I remember correctly, the Red Hat doc says it's not
> > > recommended to use GFS on a node that exports a GNBD device. So if you
> > > use GNBD as shared storage, I suppose it's better to specialize one or
> > > more nodes as GNBD "servers".
> > >
> > >
> > > HTH
> > >
> > > > Thanks for your answers!!
> > >
> > > --
> > > Xavier Montagutelli                      Tel : +33 (0)5 55 45 77 20
> > > Service Commun Informatique              Fax : +33 (0)5 55 45 75 95
> > > Universite de Limoges
> > > 123, avenue Albert Thomas
> > > 87060 Limoges cedex
> > >
>
> --
> Xavier Montagutelli                      Tel : +33 (0)5 55 45 77 20
> Service Commun Informatique              Fax : +33 (0)5 55 45 75 95
> Universite de Limoges
> 123, avenue Albert Thomas
> 87060 Limoges cedex
>
> --
> Linux-cluster mailing list
> Linux-cluster at redhat.com
> https://www.redhat.com/mailman/listinfo/linux-cluster
>