[darcs-users] Current Pull Attempting to Overallocate

Aggelos Economopoulos aoiko at cc.ece.ntua.gr
Thu Jan 29 15:08:36 UTC 2004

On Thu, 29 Jan 2004 07:54:38 -0500
David Roundy <droundy at jdj5.mit.edu> wrote:

> On Wed, Jan 28, 2004 at 02:12:01PM -0800, John Meacham wrote:
> > 
> > hmm? why would it matter if the memory was allocated near or far
> > from the ghc heap? nowadays memory is allocated via mmap, so can be
> > placed anywhere and freed back to the system in non-contiguous
> > chunks.

[sorry for breaking up the original text like this, but you make so many
statements I wanted to discuss further that, if I just answered below,
my reply would be very hard to follow]

> Well, the ghc allocation involves a contiguous heap, and when it runs
> out of space it increases the ghc heap size.  So I don't want to get
> my memory stuck inside the ghc heap, or the ghc heap will get bigger
> than needed,

OK, I took a peek inside MBlock.c and I think I see what you mean. ghc
allocates memory in 1MB blocks that must be 1MB-aligned (this limitation
only seems to exist because of the way HEAP_ALLOCED() and
MARK_HEAP_ALLOCED() are implemented; correct me if I'm wrong. Also, does
anyone know why it doesn't use a bitmap, so that it could do allocations
in 128K blocks?), so when you do a minimal allocation (4K on x86), ghc
can't use the (1M - 4K) that's left over. If this is the case, I don't
think the ghc allocation space gets much bigger than needed, and of
course darcs can still make use of the remaining space. Or were you
referring to something entirely different?

> which can have very unpleasant results, since it doesn't trigger a gc
> (normally) until it fills up its heap,

Where can I find this logic in the ghc source?

> so if the heap is bigger than your
> physical mem, you've got very serious swapping problems that don't go
> away until darcs exits, regardless of your memory usage.

I don't see how this would lead to very serious swapping problems
regardless of your memory usage. If you've mapped (and touched) 1G but
currently use only 100M, the operating system can swap out some of your
not-so-recently-referenced pages without slowing you (or the system)
down much.

Now, if you actually want to use more memory than the system has, you
have *real* problems, and no amount of malloc() tricks can get you out
of them (unless of course you're in pathological cases: too much
internal fragmentation, the OS mistakenly swapping out your pages, etc.)

> At least, this is my
> understanding of how it works.  And the ugly tricks *do* give massive
> performance benefits when running darcs check on a large repository
> (i.e. one that is big enough that the peak memory usage involves
> swapping).

Again, if you need more memory than what's available, how can
allocating your memory at a different address help you?

