[Cluster-devel] [GFS2 PATCH] flush the glock completely in inode_go_sync

Benjamin Marzinski bmarzins at redhat.com
Wed May 2 14:44:03 UTC 2007


Fix for bz #231910
When filemap_fdatawrite() is called on the inode mapping in data=ordered
mode, it adds the glock to the log. In inode_go_sync(), if gfs2_log_flush()
runs before filemap_fdatawrite(), the glock and its associated data buffers
end up back on the log after the write-out. This means a lock can be demoted
from exclusive without having been flushed from the log. The attached patch
simply moves the gfs2_log_flush() call to after the filemap_fdatawrite()
call.
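
To make the ordering concrete, here is a rough before/after sketch of the
relevant part of inode_go_sync() (simplified from the hunk below, not the
verbatim code):

	/* old ordering (broken in data=ordered mode) */
	gfs2_log_flush(gl->gl_sbd, gl);		/* log is clean for this glock here... */
	if (ip)
		filemap_fdatawrite(ip->i_inode.i_mapping);
	/* ...but in data=ordered mode the write-out has just put the glock
	 * and its data buffers back on the log */
	gfs2_meta_sync(gl);

	/* new ordering (this patch) */
	if (ip)
		filemap_fdatawrite(ip->i_inode.i_mapping);
	gfs2_log_flush(gl->gl_sbd, gl);		/* now the flush covers whatever the
						 * write-out added to the log */
	gfs2_meta_sync(gl);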

Originally, I tried moving the gfs2_log_flush() to after gfs2_meta_sync(),
but that tripped the following assertion:

GFS2: fsid=cypher-36:test.0: fatal: assertion "!buffer_busy(bh)" failed
GFS2: fsid=cypher-36:test.0:   function = gfs2_ail_empty_gl, file = fs/gfs2/glops.c, line = 61

It appears that gfs2_log_flush() leaves some of the glock's buffers in the
busy state, and a subsequent filemap_fdatawrite() call is necessary to flush
them. This makes me slightly worried that a related problem could occur
because of moving gfs2_log_flush() to after the initial filemap_fdatawrite(),
but I assume that gfs2_ail_empty_gl() would catch that case as well.
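
For comparison, the ordering I tried first, which trips the assertion above,
was roughly this (again simplified):

	if (ip)
		filemap_fdatawrite(ip->i_inode.i_mapping);
	gfs2_meta_sync(gl);
	gfs2_log_flush(gl->gl_sbd, gl);		/* can leave some of the glock's buffers
						 * busy, and nothing after this writes
						 * them back before gfs2_ail_empty_gl()
						 * checks !buffer_busy(bh) */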

Signed-off-by: Benjamin E. Marzinski <bmarzins at redhat.com>

diff -urpN --exclude-from=gfs2-2.6-nmw-070416-clean/Documentation/dontdiff gfs2-2.6-nmw-070416-clean/fs/gfs2/glops.c gfs2-2.6-nmw-070416-test/fs/gfs2/glops.c
--- gfs2-2.6-nmw-070416-clean/fs/gfs2/glops.c	2007-04-16 12:37:32.000000000 -0500
+++ gfs2-2.6-nmw-070416-test/fs/gfs2/glops.c	2007-04-29 11:37:55.000000000 -0500
@@ -156,9 +156,9 @@ static void inode_go_sync(struct gfs2_gl
 		ip = NULL;
 
 	if (test_bit(GLF_DIRTY, &gl->gl_flags)) {
-		gfs2_log_flush(gl->gl_sbd, gl);
 		if (ip)
 			filemap_fdatawrite(ip->i_inode.i_mapping);
+		gfs2_log_flush(gl->gl_sbd, gl);
 		gfs2_meta_sync(gl);
 		if (ip) {
 			struct address_space *mapping = ip->i_inode.i_mapping;
