[olpc-software] welcome to the olpc software mailing list
Jim Gettys
jg at freedesktop.org
Mon Mar 6 19:08:31 UTC 2006
On Mon, 2006-03-06 at 13:39 -0500, Alan Cox wrote:
> On Mon, Mar 06, 2006 at 01:19:54PM -0500, Jim Gettys wrote:
> > Ah, but I want OOM: I just need to be able to control what gets killed,
> > possibly warning that application long enough in advance it could save
> > its state for a later transparent restart... The kernel has no insight
> > into that sort of behavior.
>
> At the point we are OOM we cannot deliver a message to an application because
> we are OOM and we would need memory to do it.
I understand... The deadlock is obvious to the great unwashed.
I just need some warning; and we can allocate a very small swap area
that we don't otherwise use, so that such memory can (at least usually)
be had.
Also, folks have mumbled about swapping to compressed memory as a
possible solution.
And a solution where we tell the kernel in advance which process to
OOM-kill may be reasonable (so it can "kill the right thing" without
waiting). In this solution, I somehow need to be able to monitor memory
consumption (and maybe slow things down) as things get tight, to give
some processes some warning to save state.
So there is a large design space to explore. I don't know which
route(s) the Brazilians are currently exploring.
>
> In the no OOM configuration the program asking to allocate memory gets refused
> and can act appropriately as a result of the received NULL return from malloc.
> Programs can also pre allocate address space if they need some buffers/objects
> for recovery.
Fixing everyone to "do the right thing" when malloc fails is nearly
impossible, particularly in languages like C; what is more, any
particular application doesn't have the correct information (only the WM
knows for sure ;-)). There was an effort to reengineer the X server
(with some success) to be robust in the malloc-failed case, and it was a
lot of work (C is not a pretty language).
I wish C had exceptions; it doesn't. It is the mallocs buried down
inside libraries that get really hairy.
Regards,
- Jim