[Cluster-devel] [gfs2-utils PATCH] gfs2-utils: Wrong hash value used to clean journals

Bob Peterson rpeterso at redhat.com
Fri Dec 14 14:16:19 UTC 2018


Hi,

When fsck.gfs2 sees a dirty journal (one that does not have a
log header with the UNMOUNT flag set at the wrap point), it replays
the journal and writes out a log header to "clean" the journal.
Unfortunately, before this patch, it used the wrong hash value in
that log header. So on every subsequent run, fsck.gfs2 would not
recognize its own log header because of the wrong hash, and would
again see the journal as dirty (until the file system was mounted
and unmounted, which would write a new, correct log header).
As a result, repeated runs of fsck.gfs2 would always replay the
journal, which remained "dirty".

This patch changes function clean_journal so that it uses the
correct hash function. The journal is then truly clean, and
subsequent runs (or mounts) will recognize it as such.

Signed-off-by: Bob Peterson <rpeterso at redhat.com>
---
 gfs2/libgfs2/recovery.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/gfs2/libgfs2/recovery.c b/gfs2/libgfs2/recovery.c
index 6b14bf94..06f81116 100644
--- a/gfs2/libgfs2/recovery.c
+++ b/gfs2/libgfs2/recovery.c
@@ -241,7 +241,7 @@ int clean_journal(struct gfs2_inode *ip, struct gfs2_log_header *head)
 	lh->lh_sequence = cpu_to_be64(head->lh_sequence + 1);
 	lh->lh_flags = cpu_to_be32(GFS2_LOG_HEAD_UNMOUNT);
 	lh->lh_blkno = cpu_to_be32(lblock);
-	hash = gfs2_disk_hash((const char *)lh, sizeof(struct gfs2_log_header));
+	hash = lgfs2_log_header_hash(bh->b_data);
 	lh->lh_hash = cpu_to_be32(hash);
 	bmodified(bh);
 	brelse(bh);
