
Re: Memory allocation jump after running for a while with a large number of threads



Ingo Molnar wrote:

On Wed, 19 Feb 2003, Hui Huang wrote:

What's interesting is the numerous 992K mmap'ed memory regions (note there
is a 1M hole between every 992K chunk and the next 32K):

[...]
bdf00000 (32 KB) rw-p (00:00 0)
bdf08000 (992 KB) ---p (00:00 0)
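For anyone wanting to reproduce this kind of listing, the "---p" (PROT_NONE) regions can be picked out of /proc/<pid>/maps with a few lines of script. A rough Python sketch, Linux-only; `find_none_regions` is just an illustrative name:

```python
# Sketch: list the "---p" (PROT_NONE) regions of a process -- the kind of
# inaccessible 992K chunks shown above.  Linux-only; parses /proc/<pid>/maps.
def find_none_regions(pid="self"):
    regions = []
    with open(f"/proc/{pid}/maps") as f:
        for line in f:
            fields = line.split()
            addr_range, perms = fields[0], fields[1]
            if perms == "---p":
                start, end = (int(x, 16) for x in addr_range.split("-"))
                regions.append((start, (end - start) // 1024))  # (addr, KB)
    return regions

for addr, kb in find_none_regions():
    print(f"{addr:08x} ({kb} KB) ---p")
```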


there might be another effect. If this is the thread stack (is it?),

I don't know what it is. My assumption is that it's not a stack, since stacks should not be larger than 128K, and the output does contain a large number of 128K chunks that I assume correspond to the actual thread stacks.

then Linux will lazy-allocate the pages mapped by it, and NPTL will cycle the
stacks (i.e. instead of munmap()-ing them, they get cached). I don't
remember the exact thresholds NPTL is using for caching stacks.
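The lazy-allocation effect is easy to demonstrate: an anonymous mmap() shows up in VSZ (VmSize) immediately, but RSS (VmRSS) only grows once the pages are actually dirtied. A minimal sketch, Linux-only since it reads /proc:

```python
# Sketch of lazy allocation: anonymous mmap() grows VmSize right away,
# but VmRSS only grows once pages are touched.  Linux-only; reads /proc.
import mmap

def status_kb(field):
    """Read a kB-valued field (VmRSS, VmSize, ...) from /proc/self/status."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])

SIZE = 64 * 1024 * 1024                      # 64 MB

vsz0, rss0 = status_kb("VmSize"), status_kb("VmRSS")
buf = mmap.mmap(-1, SIZE)                    # anonymous private mapping
vsz1, rss1 = status_kb("VmSize"), status_kb("VmRSS")

for off in range(0, SIZE, 4096):             # dirty one byte per page
    buf[off] = 0x78
rss2 = status_kb("VmRSS")

print("VmSize grew by", vsz1 - vsz0, "kB on mmap")
print("VmRSS grew by", rss1 - rss0, "kB on mmap,", rss2 - rss1, "kB on touch")
buf.close()
```

This is why a stack cache can inflate VSZ without the process ever paying for most of those pages in RAM.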

Perhaps Ulrich could help here?

In any case, the RSS of the JVM process/threads should show the exact amount of
memory allocated.

if you add up the memory maps of the JVM, how much RAM is it, and how big
is the RSS [in the 'good' and in the 'bad' cases]?

Adding up the memory maps amounts to the VSZ as reported by `ps', doesn't it? Anyway, the VSZ is currently 1414232K (i.e. about 1.35G), and the RSS is 307652K. For this number of threads, the pre-jump VSZ would be about 600M. RSS is, as far as I can tell, unaffected by the jumps.
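For anyone who wants to verify that the two figures agree, summing the maps can be scripted. A rough sketch, Linux-only; VmSize in /proc/<pid>/status is what ps reports as VSZ, and a small discrepancy is expected because maps lists entries such as [vsyscall] that VmSize does not count:

```python
# Sketch: sum all mappings in /proc/self/maps and compare against VmSize
# (the VSZ figure ps reports).  Linux-only.
def total_mapped_kb(pid="self"):
    total = 0
    with open(f"/proc/{pid}/maps") as f:
        for line in f:
            start, end = (int(x, 16) for x in line.split()[0].split("-"))
            total += (end - start) // 1024
    return total

def vmsize_kb():
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmSize:"):
                return int(line.split()[1])  # value is in kB

print("sum of maps:", total_mapped_kb(), "kB, VmSize:", vmsize_kb(), "kB")
```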

To show this, I've attached a small graph of the chat memory usage. The green line is VSZ, and the blue line is RSS. The first jump was on Wednesday, the second one on Tuesday. The server was restarted on Friday, hence the drop on that date. The graph shows that there is no apparent correspondence between the jump and the RSS.

[Attachment: PNG image of the memory-usage graph described above]

