[dm-devel] System freezes after OOM

Tetsuo Handa <penguin-kernel at I-love.SAKURA.ne.jp>
Thu Jul 14 11:01:27 UTC 2016


Michal Hocko wrote:
> OK, this is the part I have missed. I didn't realize that the swapout
> path, which is indeed PF_MEMALLOC, can get down to blk code which uses
> mempools. A quick code traversal shows that at least
> 	make_request_fn = blk_queue_bio
> 	blk_queue_bio
> 	  get_request
> 	    __get_request
> 
> might do that. And in that case I agree that the above mentioned patch
> has unintentional side effects and should be re-evaluated. David, what
> do you think? An obvious fixup would be considering TIF_MEMDIE in
> mempool_alloc explicitly.

TIF_MEMDIE is racy. Since the OOM killer sets TIF_MEMDIE on only one thread,
there is no guarantee that TIF_MEMDIE is set on the thread which is looping
inside mempool_alloc(). And since __GFP_NORETRY is used (regardless of
f9054c70d28bc214), out_of_memory() is not called via __alloc_pages_may_oom().
This means that the thread which is looping inside mempool_alloc() cannot get
TIF_MEMDIE through its own allocation; it gets TIF_MEMDIE only if the OOM
killer, invoked on behalf of some other allocation, happens to select it.
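
To spell that out, the ordering in __alloc_pages_slowpath() works roughly as
in the simplified sketch below (not the exact kernel code; the helper name
allocation_may_invoke_oom() is made up for illustration):

#include <linux/gfp.h>		/* gfp_t, __GFP_NORETRY */
#include <linux/types.h>	/* bool */

static bool allocation_may_invoke_oom(gfp_t gfp_mask)
{
	/*
	 * mempool_alloc() always ORs in __GFP_NORETRY, so the slowpath
	 * gives up here, before __alloc_pages_may_oom() is reached ...
	 */
	if (gfp_mask & __GFP_NORETRY)
		return false;

	/* ... and out_of_memory() is never called for this allocation. */
	return true;
}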

Maybe mempool_alloc() should set __GFP_NOMEMALLOC by default and clear it
when fatal_signal_pending() is true? But that behavior would let a task which
was killed for some other reason (i.e. not OOM-killed) deplete the memory
reserves, which can end up OOM-killing somebody else. Sigh...
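
Concretely, the idea would be something like the sketch below (rough sketch
only; mempool_adjust_nomemalloc() is a made-up helper name to show the idea,
not an existing kernel function):

#include <linux/gfp.h>		/* gfp_t, __GFP_NOMEMALLOC */
#include <linux/sched.h>	/* current, fatal_signal_pending() */

static gfp_t mempool_adjust_nomemalloc(gfp_t gfp_mask)
{
	if (fatal_signal_pending(current))
		gfp_mask &= ~__GFP_NOMEMALLOC;	/* dying task may dip into reserves */
	else
		gfp_mask |= __GFP_NOMEMALLOC;	/* normal case: stay off the reserves */

	return gfp_mask;
}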

David Rientjes wrote:
> On Wed, 13 Jul 2016, Mikulas Patocka wrote:
> 
> > What are the real problems that f9054c70d28bc214b2857cf8db8269f4f45a5e23 
> > tries to fix?
> > 
> 
> It prevents the whole system from livelocking due to an oom killed process 
> stalling forever waiting for mempool_alloc() to return.  No other threads 
> may be oom killed while waiting for it to exit.

Is that concern still valid? We now have the OOM reaper for the CONFIG_MMU=y case.



