[Cluster-devel] [PATCH 7/8] gfs2: gfs2_evict_inode: Put glocks asynchronously

Andreas Gruenbacher agruenba at redhat.com
Fri Jun 2 13:57:23 UTC 2017


On Thu, Jun 1, 2017 at 5:37 PM, Steven Whitehouse <swhiteho at redhat.com> wrote:
> Hi,
>
> comments below...
>
>
>
> On 31/05/17 16:03, Andreas Gruenbacher wrote:
>>
>> gfs2_evict_inode is called to free inodes under memory pressure.  The
>> function calls into DLM when an inode's last cluster-wide reference goes
>> away (remote unlink) and to release the glock and associated DLM lock
>> before finally destroying the inode.  However, if DLM is blocked
>> waiting for memory to become available, calling into DLM again will
>> deadlock.
>>
>> Avoid that by decoupling releasing glocks from destroying inodes in that
>> case: with gfs2_glock_queue_put, glocks will be dequeued asynchronously
>> in work queue context, when the associated inodes have most likely
>> already been destroyed.
>>
>> With this change, it appears that inodes can end up being unlinked,
>> remote-unlink can be triggered, and then the inode can be reallocated
>> before all remote-unlink callbacks are processed.  Revalidate the link
>> count in gfs2_evict_inode to make sure we're not destroying an
>> allocated, referenced inode.
>>
>> In addition, skip remote unlinks under memory pressure; the next inode
>> allocation in the same resource group will take care of destroying
>> unlinked inodes.
>>
>> Signed-off-by: Andreas Gruenbacher <agruenba at redhat.com>
>> ---
>>   fs/gfs2/glock.c | 10 +++++++++-
>>   fs/gfs2/glock.h |  2 ++
>>   fs/gfs2/super.c | 30 ++++++++++++++++++++++++++++--
>>   3 files changed, 39 insertions(+), 3 deletions(-)
>>
>> diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
>> index e6d32d2..4ba53e9 100644
>> --- a/fs/gfs2/glock.c
>> +++ b/fs/gfs2/glock.c
>> @@ -170,7 +170,7 @@ void gfs2_glock_free(struct gfs2_glock *gl)
>>    *
>>    */
>>
>> -static void gfs2_glock_hold(struct gfs2_glock *gl)
>> +void gfs2_glock_hold(struct gfs2_glock *gl)
>>   {
>>         GLOCK_BUG_ON(gl, __lockref_is_dead(&gl->gl_lockref));
>>         lockref_get(&gl->gl_lockref);
>> @@ -269,6 +269,14 @@ void gfs2_glock_put(struct gfs2_glock *gl)
>>         sdp->sd_lockstruct.ls_ops->lm_put_lock(gl);
>>   }
>>
>> +/*
>> + * Cause the glock to be put in work queue context.
>> + */
>> +void gfs2_glock_queue_put(struct gfs2_glock *gl)
>> +{
>> +       gfs2_glock_queue_work(gl, 0);
>> +}
>> +
>
> This is confusing. If the work item is already scheduled, then it will
> simply drop the ref count on the glock by one directly and not
> reschedule the work. That means that if the ref count were to hit zero,
> that state would potentially be missed. Maybe that can't happen in this
> case, but at first glance it isn't clear that it can't.

gfs2_glock_queue_work is meant to make sure that glock_work_func runs
at least once in the future, and that exactly one glock reference is
consumed in the process. Always.

In case you're wondering why decrementing the ref count in
__gfs2_glock_queue_work is safe: that function is called with the
glock spin lock held. glock_work_func grabs that same spin lock before
dropping its extra references, and it owns at least one reference. So
when queue_delayed_work tells us that the work is already queued, we
know that glock_work_func cannot yet have reached the point of
dropping its references, and therefore the reference we drop in
__gfs2_glock_queue_work cannot be the last one.
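
For illustration, the core of that scheme looks roughly like this (a
sketch based on the helpers named in this thread, not a verbatim copy
of the patch):

static void __gfs2_glock_queue_work(struct gfs2_glock *gl,
				    unsigned long delay)
{
	/* Caller holds gl->gl_lockref.lock (the glock spin lock). */
	if (!queue_delayed_work(glock_workqueue, &gl->gl_work, delay)) {
		/*
		 * The work was already queued.  glock_work_func also
		 * takes the glock spin lock before dropping its
		 * references and owns at least one of them, so the
		 * reference we drop here cannot be the last one.
		 */
		gl->gl_lockref.count--;
	}
}

gfs2_glock_queue_put then simply queues the work with a delay of 0,
transferring the caller's reference to glock_work_func.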

Thanks,
Andreas



