[Cluster-devel] [GFS2 PATCH 1/3] gfs2: free quota data struct during evict, not file close

Andreas Gruenbacher agruenba at redhat.com
Wed Feb 26 20:08:19 UTC 2020


On Wed, Feb 26, 2020 at 4:49 PM Bob Peterson <rpeterso at redhat.com> wrote:
> Before this patch, whenever a file was closed, if it was the last
> process out, it freed the quota data via gfs2_qa_delete(). However,
> that created race conditions between closers and other operations
> like chown that relied upon the structure. This patch moves the
> freeing of quota data to inode evict, and eliminates function
> gfs2_qa_delete in favor of just freeing the structure.

This will use sizeof(struct gfs2_qadata) == 264 more bytes for each
inode when quotas are used. I'm not categorically against that, but
maybe the locking can be fixed instead.

Please explain why we don't have the same race for reservations that
we have for quotas.

The other two patches look fine.

> Signed-off-by: Bob Peterson <rpeterso at redhat.com>
> ---
>  fs/gfs2/quota.c | 10 ----------
>  fs/gfs2/quota.h |  1 -
>  fs/gfs2/rgrp.c  |  1 -
>  fs/gfs2/super.c |  3 +++
>  4 files changed, 3 insertions(+), 12 deletions(-)
>
> diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
> index 43ffe5997098..15158b6c933b 100644
> --- a/fs/gfs2/quota.c
> +++ b/fs/gfs2/quota.c
> @@ -547,16 +547,6 @@ int gfs2_qa_alloc(struct gfs2_inode *ip)
>         return error;
>  }
>
> -void gfs2_qa_delete(struct gfs2_inode *ip, atomic_t *wcount)
> -{
> -       down_write(&ip->i_rw_mutex);
> -       if (ip->i_qadata && ((wcount == NULL) || (atomic_read(wcount) <= 1))) {
> -               kmem_cache_free(gfs2_qadata_cachep, ip->i_qadata);
> -               ip->i_qadata = NULL;
> -       }
> -       up_write(&ip->i_rw_mutex);
> -}
> -
>  int gfs2_quota_hold(struct gfs2_inode *ip, kuid_t uid, kgid_t gid)
>  {
>         struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode);
> diff --git a/fs/gfs2/quota.h b/fs/gfs2/quota.h
> index 765627d9a91e..4a4a09aee7d9 100644
> --- a/fs/gfs2/quota.h
> +++ b/fs/gfs2/quota.h
> @@ -16,7 +16,6 @@ struct gfs2_sbd;
>  #define NO_GID_QUOTA_CHANGE INVALID_GID
>
>  extern int gfs2_qa_alloc(struct gfs2_inode *ip);
> -extern void gfs2_qa_delete(struct gfs2_inode *ip, atomic_t *wcount);
>  extern int gfs2_quota_hold(struct gfs2_inode *ip, kuid_t uid, kgid_t gid);
>  extern void gfs2_quota_unhold(struct gfs2_inode *ip);
>
> diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
> index 2ee2f7d48bc1..7efc765ebc6a 100644
> --- a/fs/gfs2/rgrp.c
> +++ b/fs/gfs2/rgrp.c
> @@ -683,7 +683,6 @@ void gfs2_rsqa_delete(struct gfs2_inode *ip, atomic_t *wcount)
>         if ((wcount == NULL) || (atomic_read(wcount) <= 1))
>                 gfs2_rs_deltree(&ip->i_res);
>         up_write(&ip->i_rw_mutex);
> -       gfs2_qa_delete(ip, wcount);
>  }
>
>  /**
> diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
> index 693c6d13473c..3d36d8671cf6 100644
> --- a/fs/gfs2/super.c
> +++ b/fs/gfs2/super.c
> @@ -1410,6 +1410,9 @@ static void gfs2_evict_inode(struct inode *inode)
>  out:
>         truncate_inode_pages_final(&inode->i_data);
>         gfs2_rsqa_delete(ip, NULL);
> +       if (ip->i_qadata)
> +               kmem_cache_free(gfs2_qadata_cachep, ip->i_qadata);
> +       ip->i_qadata = NULL;
>         gfs2_ordered_del_inode(ip);
>         clear_inode(inode);
>         gfs2_dir_hash_inval(ip);
> --
> 2.24.1
>

Thanks,
Andreas




