[darcs-users] Problems with large repos
Jason M. Felice
jfelice at cronosys.com
Thu Dec 15 14:54:18 UTC 2005
Quoting "Jason M. Felice" <jfelice at cronosys.com>:
> Quoting Juliusz Chroboczek <Juliusz.Chroboczek at pps.jussieu.fr>:
>
>> How large is patch d? You should have at least twice as much physical
>> memory; if you don't, you should split it into smaller patches, for
>> example one patch per subdirectory.
>
> This gives me a couple of ideas to work with. I'll report back. But
> it also makes me wonder, since I'm having a similar-looking problem on
> a different repo on a machine with 2G of memory and a 51M
> repository... but my poor brain can only handle one problem at a time.
>
Well, the original repo is 57M total (reported by du), and the new
(4.0.0beta) repo is 71M. Last night I started a pull of one simple patch
that should be only a couple of K, on a machine with 2G of physical
memory. Darcs is currently using 1.8% of memory and has used 282
minutes of processor time. So I suspect memory is not the issue.
I reran with the profiler (Ctrl+C after a couple of minutes):
        Thu Dec 15 09:48 2005 Time and Allocation Profiling Report  (Final)

           darcs +RTS -p -RTS pull ../sugarsuite

        total time  =     185.32 secs   (9266 ticks @ 20 ms)
        total alloc = 26,824,630,364 bytes  (excludes profiling overheads)

COST CENTRE               MODULE        %time %alloc

commute                   PatchCommute   22.3   37.4
clever_commute            PatchCommute   20.2   32.9
eq_patches_base           PatchCommute   10.5    0.0
invert                    PatchCore       9.9   17.4
is_in_directory           PatchCommute    6.8    0.0
commute_filedir           PatchCommute    4.3    2.5
commute_recursive_merger  PatchCommute    3.2    2.6
merger_commute            PatchCommute    3.2    0.2
commute_nameconflict      PatchCommute    3.1    0.0
everything_else_commute   PatchCommute    3.0    0.5
simple_unforce            PatchCommute    2.2    0.0
repeated_unforce          PatchCommute    2.1    0.0
speedy_commute            PatchCommute    2.0    0.0
is_filepatch_merger       PatchCommute    1.7    0.4
toMaybe                   PatchCommute    1.1    1.9
fn2fp                     FileName        1.1    0.0
conflicted_name           PatchCommute    0.6    1.8
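For anyone unfamiliar with where a report like the one above comes from:
GHC attributes time and allocation to named "cost centres", which you can
label yourself with an SCC pragma. Here is a minimal, self-contained
sketch of the mechanism; toyCommute is an invented illustration, not
darcs's real commute function.

```haskell
-- A toy "patch": rename one token to another.
-- (Hypothetical example; darcs's Patch type is far richer.)
data Patch = Rename String String deriving (Show, Eq)

-- Try to swap the order of two renames. Fails (Nothing) when they
-- depend on each other, i.e. one rename produces or consumes a token
-- the other touches.
toyCommute :: (Patch, Patch) -> Maybe (Patch, Patch)
toyCommute (Rename a b, Rename c d)
  | b == c || d == a = Nothing      -- dependent: cannot commute
  | otherwise        =
      -- The SCC pragma labels this expression, so its time and
      -- allocation appear under "toyCommute" in the +RTS -p report.
      {-# SCC "toyCommute" #-}
      Just (Rename c d, Rename a b)

main :: IO ()
main = do
  print (toyCommute (Rename "x" "y", Rename "p" "q"))
  print (toyCommute (Rename "x" "y", Rename "y" "z"))
```

With a reasonably modern GHC you would build with profiling enabled
(e.g. `ghc -prof -fprof-auto Main.hs`, which also inserts automatic cost
centres like the ones in the darcs report) and run `./Main +RTS -p`,
which writes the Main.prof report.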
>>
>> Let us know if that fixes your problem; if not, you'll need to do a
>> manual merge, or wait a few months for a faster version of Darcs.
I would enjoy supporting your efforts here. My old, sleeping Standard
ML brain cells seem to be sluggishly awakening, allowing me to
/interpret/ Haskell source, so I could contribute unit tests or
something, if there is a large pending refactor.
--
Jason M. Felice
Cronosys, LLC <http://www.cronosys.com>
216-221-4600 x302