[Cluster-devel] How can metadata (e.g., inodes) in the GFS2 file system be shared between client nodes?

Andrew Price anprice at redhat.com
Fri Aug 9 11:26:35 UTC 2019


On 09/08/2019 12:01, Daegyu Han wrote:
> Thank you for your reply.
> 
> If my understanding is correct: in a gfs2 file system shared by clients A
> and B, if A creates /foo/a.txt, does B re-read the filesystem metadata
> area on storage to keep the data consistent?

Yes, that's correct, although 'clients' is inaccurate as there is no 
'server'. Through the locking mechanism, B would know to re-read block 
allocation states and the contents of the /foo directory, so a path 
lookup on B would then find a.txt.
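
To illustrate the idea, here is a rough conceptual sketch in Python. It is
not the real gfs2/dlm interface and the names (ClusterLock, Node, etc.) are
made up; it only models the pattern of re-reading shared storage when the
lock protecting a resource has changed hands since it was last cached:

# Conceptual sketch only -- not the gfs2 on-disk layout or the real dlm API.
# Each shared resource (a directory's contents, a block allocation bitmap)
# is protected by a cluster-wide lock carrying a version counter that is
# bumped whenever a node modifies the resource under an exclusive lock.

class ClusterLock:
    """Stands in for a dlm lock plus a version counter (lock value block)."""
    def __init__(self):
        self.version = 0          # bumped on every exclusive-lock update

class Node:
    def __init__(self, name, storage):
        self.name = name
        self.storage = storage    # the shared block device, same for all nodes
        self.cache = {}           # resource -> (version seen, cached data)

    def read(self, resource, lock):
        cached = self.cache.get(resource)
        if cached is None or cached[0] != lock.version:
            # Another node changed the resource since we last held the lock,
            # so our cache is stale: re-read from the shared storage.
            self.cache[resource] = (lock.version, self.storage[resource])
        return self.cache[resource][1]

    def write(self, resource, lock, data):
        # Holding the exclusive lock: update shared storage, bump the version.
        self.storage[resource] = data
        lock.version += 1
        self.cache[resource] = (lock.version, data)

# Two nodes mounting the same shared storage:
storage = {"/foo": []}
foo_lock = ClusterLock()
a, b = Node("A", storage), Node("B", storage)

b.read("/foo", foo_lock)              # B caches the empty /foo directory
a.write("/foo", foo_lock, ["a.txt"])  # A creates /foo/a.txt
print(b.read("/foo", foo_lock))       # B sees its cache is stale, re-reads: ['a.txt']

In real gfs2 the invalidation happens through glocks backed by dlm rather
than a version counter like this, but the effect is the same: a node only
trusts its cached metadata while it holds the corresponding lock.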

> After all, is lock_dlm what makes gfs2 different from local filesystems
> like ext4?

Exactly.

> In general, if we mount an ext4 file system on two different clients and
> update the file system on each client, we know that each client's changes
> are not reflected on the other.

Yes.

Cheers,
Andy

> Thank you,
> Daegyu
> On Fri, Aug 9, 2019 at 7:50 PM, Andrew Price <anprice at redhat.com> wrote:
> 
>> Hi Daegyu,
>>
>> On 09/08/2019 09:10, 한대규 wrote:
>>> Hi, I'm Daegyu from Sungkyunkwan University.
>>>
>>> I'm curious how GFS2's filesystem metadata is shared between nodes.
>>
>> The key thing to know about gfs2 is that it is a shared storage
>> filesystem where each node mounts the same storage device. It is
>> different from a distributed filesystem where each node has storage
>> devices that only it accesses.
>>
>>> In detail, I wonder how the metadata in the memory of a node mounting
>>> GFS2 appears as a consistent filesystem to the other nodes.
>>
>> gfs2 uses dlm for locking of filesystem metadata among the nodes. The
>> transfer of locks between nodes allows gfs2 to decide when its in-memory
>> caches are invalid and require re-reading from the storage.
>>
>>> In addition, what role does corosync play in gfs2?
>>
>> gfs2 doesn't communicate with corosync directly but it operates on top
>> of a high-availability cluster. corosync provides cluster membership,
>> messaging and quorum services. If a node stops responding, corosync will
>> notice and the cluster will trigger actions (fencing) to make sure that
>> node is put back into a safe and consistent state. This is important for
>> gfs2 to prevent "misbehaving" nodes from corrupting the filesystem.
>>
>> Hope this helps.
>>
>> Cheers,
>> Andy
>>
>>
>>
> 



