[Cluster-devel] [GFS2 PATCH] GFS2: Drop inadequate rgrps from the reservation tree
Steven Whitehouse
swhiteho at redhat.com
Tue Nov 5 11:04:00 UTC 2013
Hi,
Looks good. Thanks,
Steve.
On Mon, 2013-11-04 at 11:53 -0500, Bob Peterson wrote:
> Hi,
>
> This patch fixes a bug in the GFS2 block allocation code. The problem
> starts if a process already has a multi-block reservation, but for
> some reason, another process disqualifies it from further allocations.
> For example, the other process might set the GFS2_RDF_ERROR bit.
> The process holding the reservation jumps to label skip_rgrp, but
> that label comes after the code that removes the reservation from the
> tree, so the jump bypasses the removal. The now-unusable reservation
> is never removed from the rgrp's reservations tree; it's leaked.
> The leaked reservation eventually throws off the count of reserved
> blocks, and that in turn causes a
> BUG_ON(rs->rs_rbm.rgd->rd_reserved < rs->rs_free) to trigger.
> This patch moves the call to gfs2_rs_deltree() to after label
> skip_rgrp so that the disqualified reservation is properly removed
> from the tree, keeping the rgrp's rd_reserved count sane.
>
> Regards,
>
> Bob Peterson
> Red Hat File Systems
>
> Signed-off-by: Bob Peterson <rpeterso at redhat.com>
> ---
> diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
> index 711e835..ddcddce 100644
> --- a/fs/gfs2/rgrp.c
> +++ b/fs/gfs2/rgrp.c
> @@ -1949,15 +1949,16 @@ int gfs2_inplace_reserve(struct gfs2_inode *ip, const struct gfs2_alloc_parms *a
> return 0;
> }
>
> - /* Drop reservation, if we couldn't use reserved rgrp */
> - if (gfs2_rs_active(rs))
> - gfs2_rs_deltree(rs);
> check_rgrp:
> /* Check for unlinked inodes which can be reclaimed */
> if (rs->rs_rbm.rgd->rd_flags & GFS2_RDF_CHECK)
> try_rgrp_unlink(rs->rs_rbm.rgd, &last_unlinked,
> ip->i_no_addr);
> skip_rgrp:
> + /* Drop reservation, if we couldn't use reserved rgrp */
> + if (gfs2_rs_active(rs))
> + gfs2_rs_deltree(rs);
> +
> /* Unlock rgrp if required */
> if (!rg_locked)
> gfs2_glock_dq_uninit(&rs->rs_rgd_gh);
>