[Cluster-devel] [GFS2 PATCH] GFS2: Adjust minimum hold times

Steven Whitehouse swhiteho at redhat.com
Thu Oct 18 13:27:13 UTC 2012


Hi,

For all the patches in this set, I'm a bit concerned about the
justification for making these changes. I'd like to be reasonably
certain that we are not simply improving one workload running on one
specific set of hardware at the expense of another, for example.

How much benefit do these changes actually give us? Is it a large gain, or
just a small improvement on one particular workload?

I'd like to see some analysis of why the smaller min hold time, etc., is
the right thing to do here. Or to put it another way: if we were to ask
how we would ideally adjust the min hold time to give maximum effect,
given full knowledge of the system, is the answer something that we could
implement using only the information about the state of the system that
is available on a single node?

If so, then perhaps we should look at that, rather than changing these
constants around, unless there is a particularly large gain to be had
here,

Steve.

On Wed, 2012-10-17 at 13:34 -0400, Bob Peterson wrote:
> Hi,
> 
> This patch reduces the default and maximum values of the glock minimum
> hold time. It also increases the hold time for each new holder record
> queued, up to the maximum, and decreases it again as holders are dequeued.
> 
> Regards,
> 
> Bob Peterson
> Red Hat File Systems
> 
> Signed-off-by: Bob Peterson <rpeterso at redhat.com> 
> ---
> diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
> index e543871..beaa834 100644
> --- a/fs/gfs2/glock.c
> +++ b/fs/gfs2/glock.c
> @@ -985,6 +985,9 @@ fail:
>  	trace_gfs2_glock_queue(gh, 1);
>  	gfs2_glstats_inc(gl, GFS2_LKS_QCOUNT);
>  	gfs2_sbstats_inc(gl, GFS2_LKS_QCOUNT);
> +	/* For every new holder, add to the minimum hold time, up to the max */
> +	gl->gl_hold_time = min(gl->gl_hold_time + GL_GLOCK_PERHOLDER_INCR,
> +			       GL_GLOCK_MAX_HOLD);
>  	if (likely(insert_pt == NULL)) {
>  		list_add_tail(&gh->gh_list, &gl->gl_holders);
>  		if (unlikely(gh->gh_flags & LM_FLAG_PRIORITY))
> @@ -1079,6 +1082,9 @@ void gfs2_glock_dq(struct gfs2_holder *gh)
>  	if (gh->gh_flags & GL_NOCACHE)
>  		handle_callback(gl, LM_ST_UNLOCKED, 0);
>  
> +	/* One less holder, so adjust the minimum hold time down. */
> +	gl->gl_hold_time = max(gl->gl_hold_time - GL_GLOCK_PERHOLDER_INCR,
> +			       GL_GLOCK_MIN_HOLD);
>  	list_del_init(&gh->gh_list);
>  	if (find_first_holder(gl) == NULL) {
>  		if (glops->go_unlock) {
> diff --git a/fs/gfs2/glock.h b/fs/gfs2/glock.h
> index fd580b7..52068de 100644
> --- a/fs/gfs2/glock.h
> +++ b/fs/gfs2/glock.h
> @@ -113,11 +113,12 @@ enum {
>  
>  #define GLR_TRYFAILED		13
>  
> -#define GL_GLOCK_MAX_HOLD        (long)(HZ / 5)
> -#define GL_GLOCK_DFT_HOLD        (long)(HZ / 5)
> +#define GL_GLOCK_MAX_HOLD        (long)(HZ / 10)
> +#define GL_GLOCK_DFT_HOLD        (long)(HZ / 10)
>  #define GL_GLOCK_MIN_HOLD        (long)(10)
>  #define GL_GLOCK_HOLD_INCR       (long)(HZ / 20)
>  #define GL_GLOCK_HOLD_DECR       (long)(HZ / 40)
> +#define GL_GLOCK_PERHOLDER_INCR  (long)(2)
>  
>  struct lm_lockops {
>  	const char *lm_proto_name;
> 
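For reference, here is a minimal standalone sketch (not part of the patch)
of what the proposed constants work out to in wall-clock terms. It assumes
HZ=1000; HZ is a kernel build-time option (100, 250, 300 and 1000 are
common), so the jiffy-to-millisecond figures below are only one possible
case:

  /* Illustrative only: worked values for the hold-time constants in the
   * quoted hunk above, assuming HZ=1000. Not part of the patch. */
  #include <stdio.h>

  #define HZ 1000                              /* assumed tick rate */

  #define GL_GLOCK_MAX_HOLD       (long)(HZ / 10)  /* was HZ / 5 */
  #define GL_GLOCK_DFT_HOLD       (long)(HZ / 10)  /* was HZ / 5 */
  #define GL_GLOCK_MIN_HOLD       (long)(10)
  #define GL_GLOCK_PERHOLDER_INCR (long)(2)

  static long to_ms(long jiffies)
  {
          return jiffies * 1000 / HZ;
  }

  int main(void)
  {
          printf("max/default hold: %ld jiffies (%ld ms), was %ld jiffies (%ld ms)\n",
                 GL_GLOCK_MAX_HOLD, to_ms(GL_GLOCK_MAX_HOLD),
                 (long)(HZ / 5), to_ms(HZ / 5));
          printf("min hold:         %ld jiffies (%ld ms)\n",
                 GL_GLOCK_MIN_HOLD, to_ms(GL_GLOCK_MIN_HOLD));
          printf("per-holder step:  %ld jiffies (%ld ms)\n",
                 GL_GLOCK_PERHOLDER_INCR, to_ms(GL_GLOCK_PERHOLDER_INCR));
          return 0;
  }

With HZ=1000 this gives a 100ms cap and default (previously 200ms), a 10ms
floor, and a 2ms per-holder increment.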
