[Cluster-devel] [PATCH 3/8] gfs2: Clean up glock work enqueuing

Steven Whitehouse swhiteho at redhat.com
Fri Jun 2 09:29:19 UTC 2017


Hi,


On 02/06/17 10:19, Andreas Gruenbacher wrote:
> On Thu, Jun 1, 2017 at 5:46 PM, Steven Whitehouse <swhiteho at redhat.com> wrote:
>> [...] Can we not just have a version of gfs2_glock_put that takes
>> the number of refs to drop in case that we really have to count them up like
>> that?
>>
>> gfs2_glock_put_n(gl, drop_refs)
> Not so easily: there is no function that would drop more than one reference
> from a lockref, and adding one doesn't seem very economical.  We could
> open-code the entire lockref dropping here though; it's not too horrible.
>
> Thanks,
> Andreas
>
> diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
> index 1e7b5f2..4733ec2 100644
> --- a/fs/gfs2/glock.c
> +++ b/fs/gfs2/glock.c
> @@ -178,20 +178,11 @@ static void gfs2_glock_queue_work(struct gfs2_glock *gl, unsigned long delay) {
>   	spin_unlock(&gl->gl_lockref.lock);
>   }
>   
> -/**
> - * gfs2_glock_put() - Decrement reference count on glock
> - * @gl: The glock to put
> - *
> - */
> -
> -void gfs2_glock_put(struct gfs2_glock *gl)
> +static void __gfs2_glock_put(struct gfs2_glock *gl)
>   {
>   	struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
>   	struct address_space *mapping = gfs2_glock2aspace(gl);
>   
> -	if (lockref_put_or_lock(&gl->gl_lockref))
> -		return;
> -
>   	lockref_mark_dead(&gl->gl_lockref);
>   
>   	spin_lock(&lru_lock);
> @@ -206,6 +197,20 @@ void gfs2_glock_put(struct gfs2_glock *gl)
>   }
>   
>   /**
> + * gfs2_glock_put() - Decrement reference count on glock
> + * @gl: The glock to put
> + *
> + */
> +
> +void gfs2_glock_put(struct gfs2_glock *gl)
> +{
> +	if (lockref_put_or_lock(&gl->gl_lockref))
> +		return;
> +
> +	__gfs2_glock_put(gl);
> +}
> +
> +/**
>    * may_grant - check if its ok to grant a new lock
>    * @gl: The glock
>    * @gh: The lock request which we wish to grant
> @@ -655,19 +660,18 @@ static void glock_work_func(struct work_struct *work)
>   			delay = 0;
>   		__gfs2_glock_queue_work(gl, delay);
>   	}
> -	if (drop_refs > 1) {
> -		gl->gl_lockref.count -= drop_refs - 1;
> -		drop_refs = 1;
> -	}
> -	spin_unlock(&gl->gl_lockref.lock);
>   
>   	/*
> -	 * Drop the remaining glock reference after grabbing (and releasing)
> -	 * the lockref spinlock: this allows tasks to safely drop glock
> -	 * references under that spinlock (see __gfs2_glock_queue_work).
> +	 * Drop the remaining glock references manually here. (Mind that
> +	 * __gfs2_glock_queue_work depends on the lockref spinlock being held
> +	 * here as well.)
>   	 */
> -	if (drop_refs)
> -		gfs2_glock_put(gl);
> +	gl->gl_lockref.count -= drop_refs;
> +	if (!gl->gl_lockref.count) {
> +		__gfs2_glock_put(gl);
> +		return;
> +	}
> +	spin_unlock(&gl->gl_lockref.lock);
>   }
>   
>   /**

Yes, that is definitely an improvement over the original, and I think it 
is much less confusing. That seems like a reasonable solution to me,
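
As an aside, the pattern the patch ends up with (accumulate references, then
subtract them all at once under the lockref spinlock and only free when the
count hits zero) can be illustrated outside the kernel. The sketch below is a
hypothetical, simplified user-space analogue for discussion only: it uses a
plain pthread mutex in place of a lockref, and the names (fake_lockref,
fake_put_n) are invented, not kernel API:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Simplified stand-in for a lockref: a count protected by a lock. */
struct fake_lockref {
	pthread_mutex_t lock;
	int count;
	bool dead;
};

/*
 * Drop 'n' references in one critical section; returns true when the
 * object reached zero and was marked dead. This mirrors how the patched
 * glock_work_func open-codes the final put under gl_lockref.lock instead
 * of calling a put-one-reference helper n times.
 */
static bool fake_put_n(struct fake_lockref *ref, int n)
{
	bool died = false;

	pthread_mutex_lock(&ref->lock);
	assert(ref->count >= n);
	ref->count -= n;
	if (ref->count == 0) {
		ref->dead = true;   /* analogue of lockref_mark_dead() */
		died = true;
	}
	pthread_mutex_unlock(&ref->lock);
	return died;
}
```

The point of doing it in one critical section, as in the patch, is that other
tasks can safely drop their own references under the same spinlock without
racing against the final teardown.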

Steve.
