[Linux-cluster] GFS

tomc at teamics.com tomc at teamics.com
Thu Oct 28 18:53:54 UTC 2004


I thought that using flock() on a machine with a GFS patched kernel 
included some mojo that made flock() cluster aware.  Is this not the case? 
 

If flock() is not GFS-aware, what should be the preferred mechanism for 
shared locks and exclusive locks?  (A reference into a manual would be a 
good reply if someone knows the answer.)


tc




"Kovacs, Corey J." <cjk at techma.com>
Sent by: linux-cluster-bounces at redhat.com
10/28/04 12:18 PM
Please respond to linux clustering
 
        To:     "linux clustering" <linux-cluster at redhat.com>
        cc:     (bcc: Tom Currie/teamics)
        Subject:        RE: [Linux-cluster] GFS


It is, but no filesystem is responsible for managing how an application
reads and writes to files. The application must be aware of the possibility
of another instance, on another machine, writing to the same files, and
must coordinate reads and writes cluster-wide among application instances.

Corey 

-----Original Message-----
From: linux-cluster-bounces at redhat.com
[mailto:linux-cluster-bounces at redhat.com] On Behalf Of Mohamed Magdi Abbas
Sent: Thursday, October 28, 2004 12:11 PM
To: linux clustering
Subject: Re: [Linux-cluster] GFS

David Teigland wrote:
> On Tue, Oct 26, 2004 at 04:41:29PM -0400, Dascalu Dragos wrote:
> 
>>We are working on a similar scenario but adding mailman into the mix. 
>>The ideal outcome would be for multiple mailman/postfix servers to 
>>write archives, etc to the same centralized location on a SAN. After 
>>doing some tests this setup does not appear to be trivial. We ran into 
>>a similar problem when using NFS; if multiple machines write to the 
>>same file at the same time the file gets mangled as the machines cut 
>>each other off. With GFS we noticed that each machine has a 4k buffer 
>>window in which it writes its data. If a second process decides to 
>>start writing to the same file we noticed alternating writes to the 
>>file after 4k of data.
> 
> 
> Note that this sounds like perfectly correct behavior on the part of gfs.
> The application is responsible for the necessary file locking, of 
> course, while gfs is responsible for keeping the fs uncorrupted.
> 

I thought the idea of GFS is that it would handle locking to enable shared
filesystems among different nodes with simultaneous r/w access to the
filesystem.

Mohamed

--
Linux-cluster mailing list
Linux-cluster at redhat.com
http://www.redhat.com/mailman/listinfo/linux-cluster





More information about the Linux-cluster mailing list