[Cluster-devel] [GFS2] Clean up internal read function

Steven Whitehouse swhiteho at redhat.com
Tue Oct 16 15:24:48 UTC 2007


Hi,

On Tue, 2007-10-16 at 09:16 -0500, Kevin Anderson wrote:
> On Tue, 2007-10-16 at 16:11 +0100, Steven Whitehouse wrote: 
> > Hi,
> > 
> > On Tue, 2007-10-16 at 08:54 -0500, Kevin Anderson wrote:
> > > Steve,
> > > 
> > > What is the impact on performance due to this change?  It seems to
> > > be changing where locks are grabbed and potentially the frequency of
> > > lock requests.  The performance problems related to writes were
> > > possibly caused by similar "cleanups", and we would not like to see
> > > the same effect on our reads.  What analysis has been done of the
> > > lock traffic and read throughput of these changes?
> > > 
> > > Kevin
> > 
> > 
> > It doesn't end up grabbing locks in different places at all. We still
> > do locking over the whole read for the internal read case, and locking
> > on a per-page basis for "normal" readpage, which is the same as before.
> > The difference is that with this new code we call different readpage
> > functions (one is a wrapper that grabs the locks for the other, which
> > does the actual reading off disk) according to whether we already hold
> > the locks or not.
> > 
> > So we have the following two cases:
> > 
> >  1. readpage called from the VFS (this doesn't happen often; readpages
> > is mostly our read function)
> >     - same as before, except that we don't need to test whether the
> > function was called with glocks already held, as this now never happens
> > 
> >  2. readpage called from our internal read function
> >     - same as before, except that we don't need to check whether glocks
> > were already held, as this is always the case
> > 
> > Our internal read function isn't used often at all, so it is not
> > really performance critical. The only time it's called for any
> > significant amount of data is on mount, when it reads the resource
> > index, or possibly again if the file system is expanded.
> > 
> > The reason for putting this in now (i.e. at the start of the queue for
> > the next merge window) is to give it maximum exposure before it gets
> > merged to mainline in case of any problems,
> > 
> 
> That's all nice, but have you done a performance analysis of the impact
> of the change, i.e. run some benchmarks, measured the number of locks,
> and done a comparison?  I would like to see the data before accepting
> upstream "cleanup" requests.
> 
> Kevin

There didn't seem much point in measuring the number of locks when the
basic flow of the locking hasn't changed. I have run postmark, as well as
various other mmap & splice tests, on the tree I have here (which has
some of the other patches in it as well). I have yet to notice any
difference in performance,
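
For reference, the split is roughly shaped like the sketch below. This is
a minimal illustration only, not the actual patch: the names (a
__gfs2_readpage worker plus a gfs2_readpage locking wrapper) and the
details of the glock flags and atime handling are assumptions here.

/* Does the actual read off disk; assumes the glock is already held. */
static int __gfs2_readpage(struct file *file, struct page *page)
{
        struct gfs2_inode *ip = GFS2_I(page->mapping->host);
        int error;

        if (gfs2_is_stuffed(ip)) {
                /* Inode data lives in the dinode block itself. */
                error = stuffed_readpage(ip, page);
                unlock_page(page);
        } else {
                /* Normal case: mpage_readpage unlocks the page itself. */
                error = mpage_readpage(page, gfs2_get_block);
        }
        return error;
}

/* ->readpage entry point used by the VFS: take the glock, then read. */
static int gfs2_readpage(struct file *file, struct page *page)
{
        struct gfs2_inode *ip = GFS2_I(page->mapping->host);
        struct gfs2_holder gh;
        int error;

        gfs2_holder_init(ip->i_gl, LM_ST_SHARED, 0, &gh);
        error = gfs2_glock_nq(&gh);
        if (error) {
                /* readpage is entered with the page locked; release it. */
                unlock_page(page);
                gfs2_holder_uninit(&gh);
                return error;
        }
        error = __gfs2_readpage(file, page);
        gfs2_glock_dq_uninit(&gh);
        return error;
}

The point is simply that the VFS entry point takes the glock itself and
then delegates, while the internal read path, which already holds the
glocks, calls the unlocked variant directly, so neither path needs a
"do we already hold the lock?" test any more.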

Steve.




