commit 7095c5f1ab7ab2d9e02c203c9966b65c09249e1f
Author: Bob Peterson <rpeterso@redhat.com>
Date:   Fri Dec 14 09:16:19 2018 -0500

    gfs2-utils: Wrong hash value used to clean journals
    
    When fsck.gfs2 sees a dirty journal (one that does not have a
    log header with the UNMOUNT flag set at the wrap point), it replays
    the journal and writes out a log header to "clean" the journal.
    Unfortunately, before this patch, it computed the wrong hash value
    for that header. Because of the wrong hash, fsck.gfs2 could not
    recognize its own log header, so every subsequent run saw the
    journal as dirty (until the file system was mounted and unmounted,
    which wrote a new, correct log header). As a result, repeated runs
    of fsck.gfs2 kept replaying a journal that remained "dirty."
    
    This patch changes the clean_journal() function to use the correct
    hash function. The journal it writes is then truly clean, and
    consecutive runs (or mounts) will find it clean.
    
    Resolves: rhbz#1659490
    
    Signed-off-by: Bob Peterson <rpeterso@redhat.com>
    Signed-off-by: Andrew Price <anprice@redhat.com>

diff --git a/gfs2/libgfs2/recovery.c b/gfs2/libgfs2/recovery.c
index 6b14bf94..06f81116 100644
--- a/gfs2/libgfs2/recovery.c
+++ b/gfs2/libgfs2/recovery.c
@@ -241,7 +241,7 @@ int clean_journal(struct gfs2_inode *ip, struct gfs2_log_header *head)
 	lh->lh_sequence = cpu_to_be64(head->lh_sequence + 1);
 	lh->lh_flags = cpu_to_be32(GFS2_LOG_HEAD_UNMOUNT);
 	lh->lh_blkno = cpu_to_be32(lblock);
-	hash = gfs2_disk_hash((const char *)lh, sizeof(struct gfs2_log_header));
+	hash = lgfs2_log_header_hash(bh->b_data);
 	lh->lh_hash = cpu_to_be32(hash);
 	bmodified(bh);
 	brelse(bh);
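
The principle behind the fix, in general terms: the code that writes the
log header must compute its checksum with the same algorithm, and over the
same on-disk byte layout, that the verification path will later use;
otherwise the writer's own header fails verification and the journal keeps
looking dirty. Below is a minimal sketch of that write-then-verify round
trip. It is not gfs2-utils code: the struct, the field names, and the
demo_hash() checksum are invented for illustration and do not match gfs2's
real log header layout or CRC.

/*
 * Minimal sketch, NOT gfs2-utils code: the real on-disk layout, CRC
 * algorithm and helper names (gfs2_log_header, lgfs2_log_header_hash)
 * differ.  demo_log_header and demo_hash() are made-up stand-ins that
 * only illustrate the write-then-verify round trip.
 */
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdio.h>
#include <arpa/inet.h>	/* htonl()/ntohl() as stand-ins for cpu_to_be32() */

struct demo_log_header {
	uint32_t lh_sequence;	/* big-endian on disk */
	uint32_t lh_flags;	/* big-endian on disk */
	uint32_t lh_hash;	/* checksum of the header with this field zeroed */
};

/* Toy checksum (FNV-1a); purely illustrative, gfs2 uses its own CRC. */
static uint32_t demo_hash(const unsigned char *buf, size_t len)
{
	uint32_t h = 2166136261u;

	for (size_t i = 0; i < len; i++) {
		h ^= buf[i];
		h *= 16777619u;
	}
	return h;
}

/* Writer: build the on-disk bytes first, then hash those exact bytes. */
static void write_header(unsigned char *disk, uint32_t seq, uint32_t flags)
{
	struct demo_log_header lh = { 0 };
	uint32_t hash;

	lh.lh_sequence = htonl(seq);
	lh.lh_flags = htonl(flags);
	lh.lh_hash = 0;
	hash = demo_hash((const unsigned char *)&lh, sizeof(lh));
	lh.lh_hash = htonl(hash);
	memcpy(disk, &lh, sizeof(lh));
}

/* Reader: recompute the hash the same way and compare with what is stored. */
static int header_is_clean(const unsigned char *disk)
{
	struct demo_log_header lh;
	uint32_t stored;

	memcpy(&lh, disk, sizeof(lh));
	stored = ntohl(lh.lh_hash);
	lh.lh_hash = 0;
	return demo_hash((const unsigned char *)&lh, sizeof(lh)) == stored;
}

int main(void)
{
	unsigned char disk[sizeof(struct demo_log_header)];

	write_header(disk, 42, 0x1 /* pretend UNMOUNT flag */);
	printf("journal clean: %s\n", header_is_clean(disk) ? "yes" : "no");
	return 0;
}

Running this sketch prints "journal clean: yes". If write_header() hashed
anything other than the exact bytes header_is_clean() rechecks (for
example, a host-endian copy, or a different length), it would print "no";
that mismatch mirrors the failure mode fsck.gfs2 hit before this patch.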