[Cluster-devel] [GFS2 PATCH] GFS2: Increase i_writecount during gfs2_setattr_size

Steven Whitehouse swhiteho at redhat.com
Wed May 29 10:44:41 UTC 2013


Hi,

On Tue, 2013-05-28 at 12:54 -0400, Bob Peterson wrote:
> ----- Original Message -----
> | > --- a/fs/gfs2/rgrp.c
> | > +++ b/fs/gfs2/rgrp.c
> | > @@ -638,8 +638,10 @@ void gfs2_rs_deltree(struct gfs2_blkreserv *rs)
> | >   */
> | >  void gfs2_rs_delete(struct gfs2_inode *ip)
> | >  {
> | > +	struct inode *inode = &ip->i_inode;
> | > +
> | >  	down_write(&ip->i_rw_mutex);
> | > -	if (ip->i_res) {
> | > +	if (ip->i_res && atomic_read(&inode->i_writecount) <= 1) {
> | >  		gfs2_rs_deltree(ip->i_res);
> | >  		BUG_ON(ip->i_res->rs_free);
> | >  		kmem_cache_free(gfs2_rsrv_cachep, ip->i_res);
> | > 
> | 
> | Are there any other callers of gfs2_rs_delete where it is not appropriate
> | to have this new test?
> | 
> | I assume that the issue is that this writecount test needs to be under
> | the i_rw_mutex?
> | 
> | Steve.
> 
> Hi,
> 
> Nope. It's okay for reservations to go in and out of an rgrp's reservations
> tree; that happens all the time. What we really need to protect is where the
> reservation is freed from the cache (kmem_cache_free), which only happens in
> function gfs2_rs_delete, where it's now protected by this patch. And yes,
> the check needs to be done under the i_rw_mutex.
> 
Ok, sounds good. I've applied the patch to the -nmw tree.

> The bigger question is whether there are other places besides functions
> gfs2_setattr_size and gfs2_page_mkwrite that should be calling
> get_write_access to ensure this protection. These are the only two
> we've seen in actual practice, and I wanted the patch to be as
> minimal as possible.
> 
> Regards,
> 
> Bob Peterson
> Red Hat File Systems

Yes, looks good. If there are other places, they are likely to be those
where we write to internal files via an interface other than the VFS.
The quotactl interface comes to mind, and perhaps statfs as well. From
userspace, if we have an open file descriptor, then there should
generally not be a problem.

Steve.
