[Cluster-devel] gfs2_journaled_truncate efficiency

Andreas Gruenbacher agruenba at redhat.com
Tue Dec 12 15:12:56 UTC 2017


Hi,

looking at gfs2_journaled_truncate, I noticed that the loop in there
can be awfully expensive for huge sparse jdata files, for two
reasons:

(1) The loop goes from the truncate point to the file size, creating a
new transaction every GFS2_JTRUNC_REVOKES blocks (8192).

    We could easily avoid creating a new transaction when the current
one is still empty. Ideally, though, we would use iomap_begin to map
the file and only walk the allocated blocks.
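To illustrate the idea, here is a minimal user-space sketch (the names
and the block_mapped_fn callback are mine, not the GFS2 code; the
callback stands in for an iomap_begin-style mapping): walk the
truncated range in chunks of max_revokes blocks, but only start a
transaction for chunks that contain at least one allocated block, so
a huge hole costs no transactions at all.

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for asking the mapping layer whether a
 * block is allocated (what iomap_begin would tell us for real). */
typedef bool (*block_mapped_fn)(size_t block);

/* Count how many transactions a chunked truncate would open if we
 * skip chunks with no allocated blocks.  Purely a model, not the
 * kernel loop. */
size_t count_transactions(size_t from_block, size_t to_block,
                          size_t max_revokes, block_mapped_fn mapped)
{
    size_t transactions = 0;

    for (size_t chunk = from_block; chunk < to_block; chunk += max_revokes) {
        size_t end = chunk + max_revokes;
        if (end > to_block)
            end = to_block;

        /* A wholly unallocated chunk has nothing to revoke, so it
         * needs no transaction of its own. */
        bool any_mapped = false;
        for (size_t b = chunk; b < end; b++) {
            if (mapped(b)) {
                any_mapped = true;
                break;
            }
        }
        if (any_mapped)
            transactions++;
    }
    return transactions;
}

/* Example mappings: a fully allocated file and a sparse file with
 * only its first block allocated. */
bool all_allocated(size_t block) { (void)block; return true; }
bool first_block_only(size_t block) { return block == 0; }
```

With GFS2_JTRUNC_REVOKES = 8192, a fully allocated 16384-block range
still needs two transactions, but the sparse file needs only one, no
matter how large the hole is.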

(2) When the old file size is not block aligned, the truncate chunks
will not be block aligned either, slowing things down and causing
partial blocks to be revoked twice.

    This could be easily addressed with a little bit of rounding code.
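Something along these lines, say (a sketch only; the function name and
shape are mine, not a proposed patch): trim every non-final chunk so
that it ends on a block boundary, which makes all subsequent chunks
block aligned and stops partial blocks from being revoked twice.

```c
#include <stdint.h>

/* Pick the size of the next truncate chunk starting at offset, with
 * length bytes remaining.  Except for the final chunk, the chunk end
 * is rounded down to a block boundary.  blocksize is assumed to be a
 * power of two. */
uint64_t next_chunk(uint64_t offset, uint64_t length,
                    uint64_t max_chunk, uint64_t blocksize)
{
    uint64_t chunk = length < max_chunk ? length : max_chunk;

    if (chunk < length && chunk > blocksize) {
        /* Bytes by which offset + chunk overshoots a block boundary. */
        uint64_t tail = (offset + chunk) & (blocksize - 1);
        chunk -= tail;
    }
    return chunk;
}
```

Starting at an unaligned offset, only the first chunk is short; every
later chunk then begins and ends on a block boundary, and the final
chunk simply runs to the old end of file.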

Is this a problem we should worry about?

(gfs2_journaled_truncate apparently isn't used for directories, so
this really seems to affect regular files with the 'j' file attribute
only.)

Thanks,
Andreas
