[Linux-cluster] What is the order of processing a lock request?

Christine Caulfield ccaulfie at redhat.com
Wed May 14 09:39:40 UTC 2008

Ja S wrote:
> --- Christine Caulfield <ccaulfie at redhat.com> wrote:
>> Ja S wrote:
>>>>>> If the node doesn't have a local lock on
>>>>>> the resource then it
>>>>>> doesn't "know" it and has to ask the directory
>>>>>> node where it is mastered. 
>>>>> Does it mean even if the node owns the master
>> lock
>>>>> resource but it doesn't have a local lock
>>>>> associated with the master lock resource, it
>>>>> still needs to ask the directory node?
>>>> hash tables, hash tables, hash tables ;-)
>>> Sure. Now I see what you mean by "knows". Thanks.
>>> Could you please kindly answer my last question
>> above?
>> The answer is "No" ... because it's in the resource
>> hash table.
>> ... see, I told you it was all hash tables ...
> OK. Let's summarise what I have learned from you. If I
> am wrong, correct me please.
> A node has a hash table (HT1) which holds the master
> lock resources and local copies of master lock
> resources on remote nodes. It also has another hash
> table (HT2) which holds the content of the lock
> directory.
> When an application on a node A requests a lock on a
> file, DLM feeds the inode number of the file into a
> hash function and uses the returned hash value to
> check whether there is a corresponding lock resource
> record in the hash table HT1. If the record exists,
> DLM then processes the lock request on the lock
> resources (either master or local copy). 
> If not, DLM feeds the inode number into another hash
> function to obtain a node ID (for example node B)
> which holds the master node information of the target
> lock resource. DLM then talks with node B and gets the
> master node ID (for example node C) from the hash
> table HT2 on node B. Finally, DLM gets the target lock
> resource from the hash table HT1 on the node C and
> processes the lock request.
> Am I right this time, or still missing something (a
> third hash table?) ?

No, that's correct. It's missing a lot of detail, but the overview is fair.
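As a rough illustration, that two-hash-table lookup could be modelled like this. This is plain Python with invented names (Node, request_lock, and so on), not real DLM code; the kernel implementation is in C and far more involved:

```python
# Toy model of the lookup: HT1 is each node's resource table,
# HT2 is each node's slice of the lock directory. All names invented.

def hash_to_node(resource_name, nodes):
    """Directory lookup: hash the resource name onto a node ID."""
    return nodes[hash(resource_name) % len(nodes)]

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.resource_table = {}   # HT1: mastered resources + local copies
        self.directory = {}        # HT2: resource name -> master node ID

def request_lock(requester, resource_name, nodes):
    # 1. Check the local resource table (HT1) first.
    if resource_name in requester.resource_table:
        return requester.resource_table[resource_name]
    # 2. Otherwise hash the resource name to find the directory node ...
    dir_node = hash_to_node(resource_name, nodes)
    # 3. ... and ask its directory (HT2) which node masters the resource,
    #    becoming the master ourselves if nobody does yet.
    master_id = dir_node.directory.setdefault(resource_name, requester.node_id)
    master = nodes[master_id]
    # 4. The master's HT1 holds the resource; the requester caches a copy.
    lock = master.resource_table.setdefault(resource_name, "lock:" + resource_name)
    requester.resource_table[resource_name] = lock
    return lock
```

A second request for the same resource, from any node that has already seen it, is satisfied straight out of HT1 without touching the directory.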

There's a conflation you've made there that is fine for a simplistic
discussion of GFS but hides an important abstraction.

The DLM does not deal in inode numbers, it only deals in resource names.
The application that uses the DLM (this includes GFS) decides what the
resource names are. GFS uses a naming scheme I don't know in detail,
but it looks as though it includes the inode number. clvmd, for
instance, uses LV UUIDs or VG names as its resource names.

These resources are isolated from each other in separate lockspaces.
Lockspace is a mandatory parameter to all locking calls.
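A toy sketch of that isolation, again with invented names (this is not the real libdlm API, just an illustration of why the lockspace parameter matters):

```python
# Toy model of lockspace isolation -- invented names, not the libdlm API.

class LockManager:
    def __init__(self):
        # lockspace name -> {resource name -> granted mode}
        self.lockspaces = {}

    def lock(self, lockspace, resource, mode):
        # The lockspace is mandatory: identical resource names in
        # different lockspaces refer to unrelated resources.
        self.lockspaces.setdefault(lockspace, {})[resource] = mode
        return (lockspace, resource)

dlm = LockManager()
dlm.lock("gfs-myfs", "1234", "EX")   # GFS's idea of resource "1234"
dlm.lock("clvmd", "1234", "CR")      # clvmd's unrelated resource "1234"
```

The two "1234" resources never collide because each application's locks live in its own lockspace.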


More information about the Linux-cluster mailing list