[Linux-cluster] pool and LVM, and changes since 2000

Michael Conrad Tadpol Tilstra mtilstra at redhat.com
Wed Oct 6 15:34:59 UTC 2004


On Wed, Oct 06, 2004 at 10:36:41AM -0400, Ed L Cashin wrote:
>   The memexp locking module that was new at the time of the 2000 OLS
>   talk was designed to use RAM exported by fancy storage hardware for
>   coordinating locking.  A single node could stand in, though, taking
>   the place of the fancy RAM-exporting storage hardware.  Today, most
>   GFS installations use DLM instead.

Almost.  Most gfs installations today use gulm, which is a fail-over
equipped lock server.  We have a DLM that will be available later.
(usable now from cvs if you want.)


>   Preslan mentions that after acquiring a lock, a node must "heartbeat
>   the drive" because the locking state is on the storage hardware.

Back when we kept trying to put the locking onto the hard drives, there
weren't any cluster managers, but you still needed to track when nodes
died.  Dlock, for example, had heartbeat timers per lock. (dlock was
before dmep)  With dmep, things were done a bit differently, but it was
the same idea.
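
Roughly, the per-lock heartbeat idea looks something like this toy
user-space sketch.  The names and timeout are made up for illustration,
it is not the actual dlock interface:

/* Per-lock heartbeat sketch: the holder must refresh the lock record
 * periodically, and a silent holder can be expired by anyone else. */
#include <stdio.h>
#include <time.h>

#define HEARTBEAT_TIMEOUT 30  /* seconds a holder may go silent (made-up value) */

struct shared_lock {
    int holder_id;          /* node currently holding the lock, -1 if free */
    time_t last_heartbeat;  /* last time the holder refreshed the lock */
};

/* Holder calls this periodically while it owns the lock. */
static void heartbeat(struct shared_lock *lk, int node_id)
{
    if (lk->holder_id == node_id)
        lk->last_heartbeat = time(NULL);
}

/* Anyone looking at the lock can notice a holder that stopped heartbeating. */
static int holder_expired(const struct shared_lock *lk)
{
    return lk->holder_id != -1 &&
           time(NULL) - lk->last_heartbeat > HEARTBEAT_TIMEOUT;
}

int main(void)
{
    struct shared_lock lk = { .holder_id = 3, .last_heartbeat = time(NULL) - 60 };

    heartbeat(&lk, 5);                /* non-holder heartbeat is ignored */
    if (holder_expired(&lk)) {
        printf("node %d stopped heartbeating; reclaiming lock\n", lk.holder_id);
        lk.holder_id = -1;
    }
    return 0;
}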

> How is that done these days?  Does a lock owner heartbeat the lock
> master or does cluster management take care of this issue?

A cluster manager takes care of this now.  The core portion of gulm
tracks membership of the cluster.  For the DLM, we have a cluster
manager named cman.
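
The division of labour is roughly like the hypothetical callback below:
the cluster manager decides a node is dead, and the lock manager just
recovers that node's locks.  The names are illustrative only, not the
cman or gulm API:

/* Sketch: lock recovery driven by a membership-change notification. */
#include <stdio.h>

#define MAX_LOCKS 4

struct lock_entry {
    int id;
    int holder;   /* node id holding the lock, -1 if free */
};

static struct lock_entry locks[MAX_LOCKS] = {
    { 0, 2 }, { 1, 5 }, { 2, 2 }, { 3, -1 },
};

/* Called when the (hypothetical) cluster manager reports a node has left. */
static void node_down(int node)
{
    for (int i = 0; i < MAX_LOCKS; i++) {
        if (locks[i].holder == node) {
            printf("recovering lock %d held by dead node %d\n", locks[i].id, node);
            locks[i].holder = -1;
        }
    }
}

int main(void)
{
    node_down(2);   /* cluster manager says node 2 failed; its locks are freed */
    return 0;
}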


-- 
Michael Conrad Tadpol Tilstra
Are they gonna debug the world before release?