[Crash-utility] crash-7.3.2 very long list iteration progressively increasing memory usage

David Wysochanski dwysocha at redhat.com
Tue Jun 26 14:54:57 UTC 2018


On Tue, 2018-06-26 at 15:34 +0100, Jeremy Harris wrote:
> On 06/26/2018 03:29 PM, David Wysochanski wrote:
> > On Tue, 2018-06-26 at 09:21 -0400, Dave Anderson wrote:
> > > Yes, by default all list entries encountered are put in the built-in
> > > hash queue, specifically for the purpose of determining whether there
> > > are duplicate entries.  So if it's still running, it hasn't found any.
> > > 
> > > To avoid the use of the hashing feature, try entering "set hash off"
> > > before kicking off the command.  But of course if it finds any, it
> > > will loop forever.
> > > 
> > 
> > Ah ok yeah I forgot about the built-in list loop detection!
> 
> For a storage-less method of list loop detection: run two walkers
> down the list, advancing one by two elements and the other by one.
> If the two walkers ever land on the same element after starting, you
> have a loop.

I agree a cycle-detection algorithm [1] that avoids the hash table may
be better, especially for larger lists.
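
For reference, the two-walker ("tortoise and hare") idea looks roughly
like the sketch below when applied to an ordinary in-memory singly
linked list.  This is only an illustration of the algorithm from [1],
not crash's own list walker; the node type and next pointer are
invented for the example, and in crash itself each step would be a
read of the next pointer out of the dump rather than a direct
dereference.

    #include <stdbool.h>
    #include <stddef.h>

    struct node {
        struct node *next;          /* hypothetical list linkage */
    };

    /* Return true if the list starting at head loops back on itself. */
    static bool list_has_cycle(struct node *head)
    {
        struct node *slow = head;   /* advances one node per step */
        struct node *fast = head;   /* advances two nodes per step */

        while (fast != NULL && fast->next != NULL) {
            slow = slow->next;
            fast = fast->next->next;
            if (slow == fast)       /* walkers met: there is a loop */
                return true;
        }
        return false;               /* fast walker hit NULL: no loop */
    }

The trade-off versus the hash queue is that this uses constant memory,
but it cannot tell you which entry is duplicated, and it reads more
entries overall than a single pass.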

I also found that ctrl-c of the very long-running crash list command
did not release the hash table memory - I had to exit crash to free it.

[1] https://en.wikipedia.org/wiki/Cycle_detection



