[Linux-cluster] Starter Cluster / GFS

Marti, Robert RJM002 at shsu.edu
Wed Nov 10 21:37:08 UTC 2010


> -----Original Message-----
> From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-
> bounces at redhat.com] On Behalf Of Jankowski, Chris
> Sent: Wednesday, November 10, 2010 3:04 PM
> To: linux clustering
> Subject: Re: [Linux-cluster] Starter Cluster / GFS
> 
> Robert,
> 
> One reason is that with GFS2 you do not have to run fsck on the surviving
> node after one node in the cluster fails.
> 
> Doing fsck on a 20 TB filesystem with heaps of files may take well over an
> hour.
> 
> So, if you built your cluster for HA, you'd rather avoid it.
> 
> The locks need to be recovered, but that is a much faster operation and
> fairly time-bounded. Fsck is not.
> 
> Regards,
> 
> Chris Jankowski
> 
> -----Original Message-----
> From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-
> bounces at redhat.com] On Behalf Of Marti, Robert
> Sent: Thursday, 11 November 2010 07:51
> To: 'linux clustering'
> Subject: Re: [Linux-cluster] Starter Cluster / GFS
> 
> > -----Original Message-----
> > From: linux-cluster-bounces at redhat.com [mailto:linux-cluster-
> > bounces at redhat.com] On Behalf Of Nicolas Ross
> > Sent: Wednesday, November 10, 2010 1:32 PM
> > To: linux clustering
> > Subject: Re: [Linux-cluster] Starter Cluster / GFS
> >
> > > We had to make similar changes to our application.
> > >
> > > Avoid allowing two (or more) hosts to create small files in the same
> > > shared directory within a GFS filesystem.  That particular case
> > > scales poorly with GFS.
> > >
> > > If you can partition things so that two hosts will never create
> > > files in the same directory (we used a per-host directory structure
> > > for our application), or perhaps direct all write operations to one
> > > host while other hosts only read from GFS, it should perform well.
> >
> > Ok, I see. Our applications will read/write into their own directories
> > most of the time. In the rare cases where it's possible that 2 nodes
> > read/write to the same directory, it'll be for PHP session files. If we
> > ever reach that stage, we'll have to write a custom session handler to
> > put them into a central memcached or something else...
> >
> 
> If that's the case, why look at shared storage at all?
> 
> --

In this scenario, he's not building the apps for HA (a single server at a time, except maybe for sessions), and he's not using massive filesystems (5-6 TB total)...

The overhead involved in managing shared storage isn't typically worth it if you're not going to leverage the shared portion of it.
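As an aside on the memcached session idea mentioned upthread: you often don't even need a custom handler. Assuming the pecl "memcached" extension is installed, a php.ini change along these lines should do it (the hostnames below are placeholders, not anything from this thread):

```ini
; Sketch: store PHP sessions in a central memcached pool instead of on
; shared disk. Requires the pecl "memcached" extension; the server names
; are examples only.
session.save_handler = memcached
session.save_path = "mc1.example.com:11211,mc2.example.com:11211"
```

That keeps the small-file session churn off GFS entirely, which sidesteps the multi-writer directory problem described above.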

Rob Marti