[Crash-utility] crash-7.3.2 very long list iteration progressively increasing memory usage

David Wysochanski dwysocha at redhat.com
Tue Jun 26 14:29:07 UTC 2018


On Tue, 2018-06-26 at 09:21 -0400, Dave Anderson wrote:
> 
> ----- Original Message -----
> > Hi Dave,
> > 
> > We have a fairly large vmcore (around 250GB) with a very long kmem
> > cache list, and we are trying to determine whether a loop exists in
> > it.  The list has literally billions of entries.  Before you roll
> > your eyes, hear me out.
> > 
> > Just running the following command
> > crash> list -H 0xffff8ac03c81fc28 > list-yeller.txt
> > 
> > Seems to increase crash's memory usage very significantly over
> > time, to the point that we have the following top output:
> >   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> > 25522 dwysocha  20   0 11.2g  10g 5228 R 97.8 17.5   1106:34 crash
> > 
> > When I started the command yesterday it was adding around 4 million
> > entries to the file per minute.  At the time I estimated the command
> > would finish in around 10 hours, and I could use it to determine
> > whether there was a loop in the list.  But today it has slowed to
> > less than 1/10th of that, to around 300k entries per minute.
> > 
> > Is this type of memory usage with list enumeration expected or not?
> > 
> > I have not yet begun to delve into the code, but figured you might
> > have a gut feel for whether this is expected and fixable or not.
> 
> Yes, by default all list entries encountered are put in the built-in
> hash queue, specifically for the purpose of determining whether there
> are duplicate entries.  So if it's still running, it hasn't found any.
> 
> To avoid the use of the hashing feature, try entering "set hash off"
> before kicking off the command.  But of course, if the list does
> contain a loop, the command will then run forever.
> 

Ah ok, yeah, I forgot about the built-in list loop detection!
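
So the sequence should be something like this (reusing the list head
address from my original command):

crash> set hash off
crash> list -H 0xffff8ac03c81fc28 > list-yeller.txt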

If we increase the value of --hash when we start crash, maybe we can
keep a constant rate of additions to the file and it will finish in a
reasonable amount of time.  Any recommendations on sizing that
parameter?
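
My assumption is that the progressive slowdown comes from the hash
chains growing as entries accumulate, so each duplicate check has to
scan a longer and longer chain; more hash queue heads would keep the
chains short.  As a sketch, with the head count and file paths below
being purely illustrative:

crash --hash 16777216 /path/to/vmlinux /path/to/vmcore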

Then again, I guess if the hash of tracked entries grows larger than
RAM, we may get into swapping.
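
Alternatively, a constant-memory cycle check along the lines of
Floyd's tortoise-and-hare would sidestep the hash and the RAM growth
entirely.  A minimal sketch of the idea in C, illustrative only, since
inside crash each pointer dereference would really be a readmem() of
the dump rather than a direct load:

#include <stddef.h>

struct node {
    struct node *next;
    /* payload omitted */
};

/*
 * Floyd's cycle detection: the fast walker advances two nodes per
 * iteration and the slow walker one; if the list loops, they must
 * eventually meet, and memory usage stays constant throughout.
 */
int list_has_cycle(struct node *head)
{
    struct node *slow = head;
    struct node *fast = head;

    while (fast != NULL && fast->next != NULL) {
        slow = slow->next;
        fast = fast->next->next;
        if (slow == fast)
            return 1;   /* walkers met: there is a loop */
    }
    return 0;           /* fast walker hit NULL: the list terminates */
}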



> Dave
> 
> 
> > 
> > Thanks.
> > 