[darcs-devel] announcing darcs 2.0.0pre1, the first prerelease for darcs 2

David Roundy daveroundy at gmail.com
Wed Dec 19 14:26:29 UTC 2007


On Dec 18, 2007 10:00 PM, Dmitry Kurochkin <dmitry.kurochkin at gmail.com> wrote:
> I tried to get pipelining working with cURL, but no luck. It looks to me
> like the cURL multi API is overcomplicated and not too well documented...
>
> So I have taken a look at libwww, and it works great! I created (copied
> from a sample) a simple program that loads a given URL many times, first
> using only a persistent connection and then using pipelining. The results
> are much better than I expected: loading http://nnov.ru 1000 times takes
> 1:29.49 with a persistent connection and only 22.946s with pipelining!
>
> I will experiment with replacing the current cURL implementation with
> libwww. I am not familiar with the Darcs sources, so advice is welcome.
> After a quick look at External.hs, it looks to me like we can provide a
> function that takes a list of URLs instead of one and fetches them using
> pipelining. I think this will require minimal changes to the Darcs
> sources. Am I correct about this?

Actually, there's already a copyRemotes function (and related
functions) that grabs multiple files at a time from a list of URLs.
Replacing this function with one that uses pipelining will gain us
something when using old repositories.  The catch is that we rarely
call this function (only for get, I believe), and never call it when
using the new hashed format.  The trouble is that it requires knowing
in advance which files we will need, which doesn't interact well with
programmer-friendly lazy downloading.
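
To make that concrete, here is a minimal sketch of the shape such a
replacement could take (fetchMany and the wget stand-in are purely
illustrative, not the actual copyRemotes interface):

module Main where

import System.Process ( rawSystem )
import System.Exit ( ExitCode )

-- Hypothetical pipelined fetch: the caller hands over every
-- (url, destination) pair up front, so the HTTP layer is free to
-- queue them all on one persistent, pipelined connection.
fetchMany :: [(String, FilePath)] -> IO ()
fetchMany = mapM_ fetchOne
  where
    -- Stand-in transport: one wget call per file.  A libwww-backed
    -- version would enqueue all requests before blocking.
    fetchOne :: (String, FilePath) -> IO ExitCode
    fetchOne (url, dest) = rawSystem "wget" ["-q", "-O", dest, url]

main :: IO ()
main = fetchMany [ ("http://nnov.ru/", "page1")
                 , ("http://nnov.ru/", "page2") ]

The important property is that the whole list must be known when the
call is made, which is exactly the limitation described above.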

So a much nicer feature would be something that can work with lazy
downloading, and somehow add URLs to the pipelined queue as we go.  I
don't know if libwww will work with this approach, but I suspect it'd
require the least reworking of darcs' code.
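
Something along these lines, perhaps (all the names here are
hypothetical, and the single worker thread shelling out to wget is
only a stand-in for libwww's pipelining machinery):

module Main where

import Control.Concurrent ( forkIO )
import Control.Concurrent.Chan ( Chan, newChan, readChan, writeChan )
import Control.Concurrent.MVar ( MVar, newEmptyMVar, putMVar, takeMVar )
import System.Process ( rawSystem )

-- One queued download: url, destination, and a completion signal.
data Download = Download String FilePath (MVar ())

-- Start a downloader whose queue can be extended while it runs.
newDownloader :: IO (Chan Download)
newDownloader = do
  queue <- newChan
  _ <- forkIO (worker queue)
  return queue
  where
    worker q = do
      Download url dest done <- readChan q
      _ <- rawSystem "wget" ["-q", "-O", dest, url]
      putMVar done ()
      worker q

-- Enqueue a fetch and return immediately.
requestUrl :: Chan Download -> String -> FilePath -> IO (MVar ())
requestUrl q url dest = do
  done <- newEmptyMVar
  writeChan q (Download url dest done)
  return done

-- Block until a previously requested file has actually arrived.
waitUrl :: MVar () -> IO ()
waitUrl = takeMVar

main :: IO ()
main = do
  q <- newDownloader
  d <- requestUrl q "http://nnov.ru/" "page"
  -- ... lazily discover and request more files here ...
  waitUrl d

That way lazy code can keep discovering files and tossing them onto
the queue, and only block when it actually needs a particular file's
contents.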

But rewriting copyRemotes is definitely a good starting point.
Ideally we wouldn't remove the libcurl code, but would add configure
checks to use libwww if available, otherwise libcurl if it's present,
and finally fall back on wget etc. if neither library is present.
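
For instance, the choice could be made at compile time from flags
that configure defines (the HAVE_* macros and the Fetch.* binding
modules below are made up for illustration):

{-# LANGUAGE CPP #-}
module Fetch ( fetchUrl ) where

#if defined(HAVE_LIBWWW)
import Fetch.Libwww ( fetchUrlLibwww )  -- hypothetical libwww binding
#elif defined(HAVE_CURL)
import Fetch.Curl ( fetchUrlCurl )      -- hypothetical libcurl binding
#else
import System.Process ( rawSystem )
#endif

-- Pick the best transport found at configure time, falling back on
-- an external wget when neither library is available.
fetchUrl :: String -> FilePath -> IO ()
#if defined(HAVE_LIBWWW)
fetchUrl = fetchUrlLibwww
#elif defined(HAVE_CURL)
fetchUrl = fetchUrlCurl
#else
fetchUrl url dest = do
  _ <- rawSystem "wget" ["-q", "-O", dest, url]
  return ()
#endif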

David
