[darcs-users] Current Pull Attempting to Overallocate

John Meacham john at repetae.net
Wed Jan 28 22:12:01 UTC 2004

On Wed, Jan 28, 2004 at 12:13:23PM -0500, David Roundy wrote:
> On Wed, Jan 28, 2004 at 08:09:26AM -0800, Paul Snively wrote:
> > *** malloc: vm_allocate(size=524288000) failed (error code=3)
> > *** malloc[563]: error: Can't allocate region
> > Ugh on length 4
> ...
> > Oddly, darcs still seems to work OK, so far, even with the allocation 
> > failures. But they make me nervous. And 524,288,000 is an insane amount 
> > of even virtual space to try to allocate in one go, IMHO.
> > 
> > Thoughts?
> This is a very ugly trick, which I'll probably have to give up on.  The
> problem it tried to fix was that the ghc garbage collector never decreases
> the size of its heap, which means that if darcs ever grows its ghc heap
> beyond the amount of RAM available, it will be forever swapping until it
> exits, even if the amount of memory actually used goes back down.
> About half of the memory darcs uses isn't allocated by ghc itself, but
> instead is malloced.  If I can keep this memory out of the ghc heap, then
> when memory usage drops back down, it can drop back to half of its peak
> value, which is VERY nice if its peak value happened to be about twice the
> physical memory available.
> The trick I use (which is in fpstring.c for the curious) is to allocate a
> very large chunk of memory, then allocate a second large chunk of memory,
> realloc the second large chunk to its desired size, and finally deallocate
> the first large chunk.  The net result is that the allocated memory is far
> from the ghc heap... or at least that seems to be the net result, and it
> works well under linux.  I get the same errors you do on MacOS X, which is
> presumably because MacOS is less optimistic than linux is with respect to
> out of memory conditions.  Linux will just let the allocation succeed and
> hope the memory isn't actually used, which works out fine in this case,
> since darcs doesn't use the memory.

Hmm? Why would it matter whether the memory is allocated near or far from
the ghc heap? Nowadays memory is allocated via mmap, so it can be placed
anywhere and freed back to the system in non-contiguous chunks.

Unless you specifically allow Linux to overcommit memory, you can't
allocate more than can theoretically be satisfied by all the RAM and
swap. This is a relatively recent change to Linux: allowing overcommit
is useful, but it makes it hard to decide whom to kill on OOM, since any
random memory access can cause an OOM. You can change the policy via
/proc (/proc/sys/vm/overcommit_memory), but in any case this all seems
unnecessary to begin with.

John Meacham - California Institute of Technology, Alum. - john at foo.net
