[Crash-utility] [PATCH 3/4] remove unreachable (and slow) code

Greg Thelen gthelen at google.com
Fri Apr 6 15:15:03 UTC 2018


task_exists() scans known tasks for a given task address.
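
It is a linear scan over the contexts registered so far, so each call
costs O(running_tasks).  Roughly (a simplified sketch, not the verbatim
source; struct task_context, FIRST_CONTEXT() and RUNNING_TASKS() come
from crash's defs.h):

  int task_exists(ulong task)
  {
          int i;
          struct task_context *tc;

          /* linear scan of the contexts added so far */
          tc = FIRST_CONTEXT();
          for (i = 0; i < RUNNING_TASKS(); i++, tc++) {
                  if (tc->task == task)
                          return TRUE;
          }
          return FALSE;
  }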

refresh_radix_tree_task_table() looks like:
  hq_open()
  for cpus {
    hq_enter(per_cpu_idle_thread)
  }
  for ... {
    hq_enter(task)
  }
  hq_close()

  tt->running_tasks = 0
  for task in retrieve_list(hq_entries) {
    if (task_exists()) {
       duplicate task address
       continue
    }
    add_context()
  }

Because hq_enter() rejects duplicate addresses, retrieve_list() returns
only unique tasks, so the above task_exists() check will never find a
duplicate.  So remove the check.  This converts the above loop from
O(N^2) to O(N), which saves considerable startup time when there are a
large number of tasks.  On a 1M task dump this patch reduces startup
time: 44m => 1m.
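
To put a rough number on it: each task_exists() call scans the contexts
added so far, so for N tasks the removed check performs on the order of
N^2/2 address comparisons in total (about 5 * 10^11 for N = 1M), which
is consistent with the 44-minute startup seen before this patch.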

I was not able to test the other pid table refresh routines, but I
suspect they may also benefit from a similar removal.

Signed-off-by: Greg Thelen <gthelen at google.com>
---
 task.c | 10 ----------
 1 file changed, 10 deletions(-)

diff --git a/task.c b/task.c
index 82fa015805cb..cddc1f5b651a 100644
--- a/task.c
+++ b/task.c
@@ -2473,16 +2473,6 @@ retry_radix_tree:
 			goto retry_radix_tree;
 		}
 
-		if (task_exists(*tlp)) {
-			error(WARNING,
-		           "%sduplicate task address found in task list: %lx\n",
-				DUMPFILE() ? "\n" : "", *tlp);
-			if (DUMPFILE())
-				continue;
-			retries++;
-			goto retry_radix_tree;
-		}
-
 		if (!(tp = fill_task_struct(*tlp))) {
 			if (DUMPFILE())
 				continue;
-- 
2.17.0.484.g0c8726318c-goog



