[Cluster-devel] [PATCH] gfs2: Glock dump performance regression fix

Bob Peterson rpeterso at redhat.com
Thu Feb 1 19:00:44 UTC 2018


----- Original Message -----
| Restore an optimization removed in commit 7f19449553 "Fix debugfs glocks
| dump": keep the glock hash table iterator active while the glock dump
| file is held open.  This avoids having to rescan the hash table from the
| start for each read, with quadratically rising runtime.
| 
| In addition, use rhashtable_walk_peek for resuming a glock dump at the
| current position: when a glock doesn't fit in the provided buffer
| anymore, the next read must revisit the same glock.
| 
| Finally, also restart the dump from the first entry when we notice that
| the hash table has been resized in gfs2_glock_seq_start.
| 
| Signed-off-by: Andreas Gruenbacher <agruenba at redhat.com>
| ---
Hi,

Thanks. This is now pushed to the for-next branch of the linux-gfs2 tree:
https://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2.git/commit/fs?h=for-next&id=7ac07fdaf840f9b141c6d5c286805107227c0e68

I did change one small thing: "} else" became "} else {" ... "}".

Regards,

Bob Peterson
Red Hat File Systems
