[darcs-users] darcs record and huge patches

David Roundy droundy at abridgegame.org
Wed Jan 28 11:51:40 UTC 2004


On Wed, Jan 28, 2004 at 12:40:17AM +0200, Aggelos Economopoulos wrote:
> On Tue, 27 Jan 2004 22:36:58 +0200
> Aggelos Economopoulos <aoiko at cc.ece.ntua.gr> wrote:
> 
> > So, has anyone tried running record on a large tree with many local
> > changes? If it isn't supposed to work I'll just kill the process, but
> > if it is, how long should it take?
> 
> Well, it took about five hours, but it seems to have worked (it produced
> a 27M patch, 6M compressed). Still, it doesn't seem normal that it
> should take that long or consume so much memory - can't you force
> garbage collection at some point? Would disabling the use of mmap() help
> in such extreme cases?

You said the CPU usage stayed at 90% pretty much the whole time? If that's
the case, then reducing memory usage seems unlikely to speed things up
much, as it sounds like the process wasn't thrashing too badly.
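
A quick way to double-check that, by the way, is to compare the CPU time
a run actually used against its major page fault count. Here's a rough
sketch in Python - nothing from darcs itself, and the profile_run name
and the darcs invocation in the comment are only for illustration:

    import resource
    import subprocess
    import sys
    import time

    def profile_run(cmd):
        # Run the command and compare CPU time against wall time; a low
        # CPU percentage plus many major page faults points at thrashing.
        start = time.time()
        subprocess.run(cmd, check=True)
        wall = max(time.time() - start, 1e-9)
        ru = resource.getrusage(resource.RUSAGE_CHILDREN)
        cpu = ru.ru_utime + ru.ru_stime
        print("wall: %.1fs  cpu: %.1fs  (%.0f%% CPU)"
              % (wall, cpu, 100 * cpu / wall))
        print("major page faults: %d" % ru.ru_majflt)

    if __name__ == "__main__":
        profile_run(sys.argv[1:])  # e.g. profile_run.py darcs record --all

If the CPU percentage is high and the fault count is low, memory isn't
the bottleneck.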

What version of darcs are you running? The latest version in the repository
has a few patches that seem to reduce swapping, at least when running darcs
check on large repositories.  I don't think I'll be able to significantly
improve the peak memory usage, though, so record probably won't be helped
much.

I'd also be interested in your cvsps conversion script, as I've done the
same thing myself, and am optimistic that you may have done at least part
of it better than I did.  :) I've been playing with converting the bkcvs
linux kernel repository to darcs (so far it's taken, I think, about four
days).  None of the patches took that long--the biggest is 24M compressed,
but that one was all simple file additions rather than modifications.  I
think most of the time is being spent simply reading the listings of all
the directories and checking the modification times of all the files (see
the sketch below), since most of the changesets only affect a few files.
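
Just to illustrate the cost of that scan, here's a rough sketch in Python
(nothing darcs-specific) that walks a tree and stat()s every file to
collect modification times, which is essentially the work described above:

    import os
    import sys
    import time

    def scan_mtimes(root):
        # Walk every directory and stat() every file, recording mtimes.
        mtimes = {}
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    mtimes[path] = os.stat(path).st_mtime
                except OSError:
                    pass  # file vanished or is unreadable; skip it
        return mtimes

    if __name__ == "__main__":
        root = sys.argv[1] if len(sys.argv) > 1 else "."
        start = time.time()
        n = len(scan_mtimes(root))
        print("stat()ed %d files in %.2fs" % (n, time.time() - start))

On a kernel-sized tree that's tens of thousands of stat() calls for every
changeset recorded, which adds up quickly over thousands of changesets.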
-- 
David Roundy
http://civet.berkeley.edu/droundy/
