[darcs-devel] [issue647] automated benchmarking and comparison

Jason Dagit dagit at codersbase.com
Wed Feb 6 17:29:08 UTC 2008


On Feb 6, 2008 9:24 AM, Eric Kow <bugs at darcs.net> wrote:
>
> New submission from Eric Kow <eric.kow at gmail.com>:
>
> It would be really useful to have a quick and dirty benchmarking script that we
> can distribute either with darcs or alongside it.
>
> The script would just encode some current darcs benchmarking practices: namely,
> getting a repository, obliterating 1000 patches, and pulling them back.  Having
> it take the best of N trials is probably a good idea, and automatically
> summarising the results (% improvement) would be nice as well.
>
> Perhaps a good way to do it is to make it parameterisable with two directories
> (each containing a darcs build) and a repository.  To save people from having to
> choose a repository, we could also distribute the GHC one as the prototypical
> large-ish repository, and hopefully set our sights higher one day.
>
> Also, I think it's more important for us to have a basic version of this script
> now than a fancy one later :-)
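
For concreteness, a basic version of what Eric describes could be a single
Haskell file along these lines.  This is only a sketch: the repository names,
binary paths, and patch count are placeholders, it just shells out to two darcs
binaries, and it measures plain wall-clock time.

import System.Process (callCommand)
import Data.Time.Clock (diffUTCTime, getCurrentTime)
import Text.Printf (printf)

-- Wall-clock time a shell command, in seconds.
timeCommand :: String -> IO Double
timeCommand cmd = do
  start <- getCurrentTime
  callCommand cmd
  end <- getCurrentTime
  return (realToFrac (diffUTCTime end start))

-- One obliterate-then-pull cycle: drop the last 1000 patches from the
-- working repository, then pull them back from an untouched copy.  The
-- pull restores the repository, so the cycle can be repeated for N trials.
benchCycle :: FilePath -> FilePath -> FilePath -> IO (Double, Double)
benchCycle darcs repo pristine = do
  tObl  <- timeCommand (darcs ++ " obliterate --all --last 1000 --repodir " ++ repo)
  tPull <- timeCommand (darcs ++ " pull --all --repodir " ++ repo ++ " " ++ pristine)
  return (tObl, tPull)

main :: IO ()
main = do
  let trials   = 3 :: Int
      repo     = "ghc-working"                      -- placeholder: copy to benchmark in
      pristine = "ghc-pristine"                     -- placeholder: untouched copy to pull from
      binaries = ["/path/to/darcs-old", "/path/to/darcs-new"]  -- the two builds to compare
  results <- mapM (\darcs -> do
                     runs <- mapM (\_ -> benchCycle darcs repo pristine) [1 .. trials]
                     return (darcs, (minimum (map fst runs), minimum (map snd runs))))
                  binaries
  mapM_ (\(darcs, (obl, pull)) ->
           printf "%s: obliterate %.2fs, pull %.2fs (best of %d)\n" darcs obl pull trials)
        results
  -- Summarise the improvement of the second binary over the first.
  case results of
    [(_, (o1, p1)), (_, (o2, p2))] ->
      printf "improvement: obliterate %.1f%%, pull %.1f%%\n"
             (100 * (o1 - o2) / o1) (100 * (p1 - p2) / p1)
    _ -> return ()

Taking the minimum over the trials rather than the mean keeps a one-off cold
disk cache from skewing the comparison.
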

I started making a fancy one as a class project (it's in Scala).  It
works okay; it just needs more usage patterns/benchmarks implemented.
It's actually quite simple, and I spent most of my time on the project
just learning Scala, so a Haskell re-implementation could probably be
done easily.  BTW, I see this as an argument for having a libdarcs: it
would be easier to create one-off versions of darcs for benchmarking
with a libdarcs than it is to create wrappers around the darcs
executable.

Here is the link if you want to tear it apart or get ideas:
http://www.codersbase.com/index.php/DarcsSim

Probably the most useful thing you'll get out of it (if you don't end
up using it) is the way I use GHC's runtime to extract metrics.
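
To illustrate the general idea (this isn't the DarcsSim code itself, the
paths and arguments are made up, and it assumes the darcs binary accepts
RTS options at all): passing +RTS -sstderr makes GHC's runtime print an
allocation/GC/time summary on stderr, which you can capture and scrape.

import System.Process (readCreateProcessWithExitCode, shell)
import Data.List (isInfixOf)

-- Run a darcs command with GHC RTS statistics enabled and return the
-- summary lines we care about from its stderr.
runWithRtsStats :: FilePath -> [String] -> IO [String]
runWithRtsStats darcs args = do
  let cmd = unwords (darcs : args ++ ["+RTS", "-sstderr", "-RTS"])
  (_exit, _out, err) <- readCreateProcessWithExitCode (shell cmd) ""
  return [ l | l <- lines err
             , "bytes allocated" `isInfixOf` l || "Total" `isInfixOf` l ]

main :: IO ()
main = do
  stats <- runWithRtsStats "/path/to/darcs" ["whatsnew", "--repodir", "some-repo"]
  mapM_ putStrLn stats
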

Jason

