[darcs-devel] announcing darcs 2.0.0pre1, the first prerelease for darcs 2

Dmitry Kurochkin dmitry.kurochkin at gmail.com
Wed Dec 19 15:51:17 UTC 2007

2007/12/19, David Roundy <daveroundy at gmail.com>:
> On Dec 19, 2007 9:53 AM, Dmitry Kurochkin <dmitry.kurochkin at gmail.com> wrote:
> > I have created a Libwww.hs module and hslibwww.c. Libwww.hs provides
> > getUrl and getUrls functions. I have changed copyRemotesNormal to use
> > getUrls, and it is ready for testing. But I get compilation errors on
> > hslibwww.c: it is compiled with GHC, not GCC, and if I run GCC by hand
> > it works fine. The errors say that there are redefined symbols in the
> > libwww header files, like:
> >
> > In file included from /usr/include/w3c-libwww/WWWLib.h:50,
> >                  from src/hslibwww.c:5:0:
> > /usr/include/w3c-libwww/wwwsys.h:1099:1: warning: "strchr" redefined
> > Any ideas? Why is GHC used for C sources?
> I think ghc is used for C sources just to keep the build/configure
> process simple.  ghc knows how to compile C files, and this way we
> only need to identify one compiler... although the configure script
> does actually identify a C compiler also.
> Hmmm.  I'm not sure this should be an error; it says it's only a warning...
Copied too fast. Here is what I get:

% ghc -c hslibwww.c `libwww-config --cflags`
In file included from /usr/include/w3c-libwww/WWWLib.h:50,
                 from hslibwww.c:5:0:
     error: conflicting types for 'sys_errlist'
     error: previous declaration of 'sys_errlist' was here
In file included from /usr/include/w3c-libwww/HTNet.h:61,
                 from /usr/include/w3c-libwww/HTReq.h:75,
                 from /usr/include/w3c-libwww/WWWCore.h:73,
                 from /usr/include/w3c-libwww/WWWLib.h:75,
                 from hslibwww.c:5:0:
     warning: 'struct hostent' declared inside parameter list
     warning: its scope is only this definition or declaration, which
is probably not what you want
In file included from /usr/include/w3c-libwww/HTInet.h:30,
                 from /usr/include/w3c-libwww/WWWCore.h:339,
                 from /usr/include/w3c-libwww/WWWLib.h:75,
                 from hslibwww.c:5:0:
     error: field 'sock_addr' has incomplete type
In file included from /usr/include/w3c-libwww/WWWFile.h:77,
                 from /usr/include/w3c-libwww/HTInit.h:66,
                 from /usr/include/w3c-libwww/WWWInit.h:51,
                 from hslibwww.c:6:0:
     warning: 'struct stat' declared inside parameter list

If I change ghc to gcc, it compiles fine, without even a warning.
I will try using the C compiler for C source files instead of ghc.
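For reference, the FFI mechanism Libwww.hs relies on looks like the sketch below. Since libwww itself may not be installed, it imports a libc function (strlen) instead of a real hslibwww entry point; the commented-out import shows the hypothetical shape a getUrl wrapper would take (hs_get_url is an assumed name, not an actual symbol from my patch).

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
-- Minimal sketch of a foreign import, using libc's strlen so it runs
-- without libwww. A real binding in Libwww.hs would import the C
-- wrapper functions defined in hslibwww.c the same way, e.g.
-- (hypothetical name):
--   foreign import ccall "hs_get_url"
--     c_getUrl :: CString -> CString -> IO CInt
import Foreign.C.String (CString, withCString)
import Foreign.C.Types (CSize)

foreign import ccall unsafe "string.h strlen"
  c_strlen :: CString -> IO CSize

-- Marshal a Haskell String to a CString and call into C.
strLen :: String -> IO Int
strLen s = withCString s (fmap fromIntegral . c_strlen)

main :: IO ()
main = strLen "hslibwww" >>= print
```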

> > > So a much nicer feature would be something that can work with lazy
> > > downloading, and somehow add URLs to the pipelined queue as we go.  I
> > > don't know if libwww will work with this approach, but I suspect it'd
> > > require the least reworking of darcs' code.
> > If I understand correctly, the only way to implement this is a background
> > thread, or some kind of event loop inside darcs...
> > Multithreading with FFI is tricky.
> >
> > What is the problem with getting all filenames before starting a download?
> Right.  That, and we'd like to be able to start using files before
> they're all downloaded, which means we'd require something a little
> asynchronous.  For example, when doing darcs changes -s, it'd be nice
> to be able to start displaying patches before they're all downloaded
> (as is currently the case).
> > Don't we know in advance which patches we need?
> That's right.  There are a couple of complexities.  One is that we
> currently support "lazy" downloading of files, where we only download
> those that we need.  The second is that the caching system means that
> for hashed repositories we don't always download all the files that we
> need, since some of them may already be available locally.  The latter
> issue isn't too hard, as we could just filter the files, but lazy
> downloading is really nice in terms of keeping the code simple and
> avoiding needless downloads, so I'd like to keep it.
> > If which patches we need to download depends on the content of a patch
> > we have just downloaded, we can provide a callback for darcs. When
> > libwww completes another transfer, it calls the callback, and darcs,
> > based on the content of that patch, adds new downloads to the event
> > queue.
> Hmmm.  This does sound like it could be used to implement an
> asynchronous download queue.
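To make that concrete, here is a toy, purely illustrative model of such a queue (all names are made up; it only models what libwww plus a completion callback would do): each completed download may contribute follow-up URLs, which are appended to the pending queue, so files we never reach are never requested.

```haskell
-- Toy model of an asynchronous download queue with a completion
-- callback. 'followUps' plays the role of the callback that inspects a
-- just-downloaded patch and decides what else is needed; all names here
-- are hypothetical.
drain :: (String -> [String]) -> [String] -> [String]
drain followUps = go
  where
    go []       = []
    go (u : us) = u : go (us ++ followUps u)  -- enqueue what the callback found

main :: IO ()
main = print (drain cb ["patch0"])
  where
    -- Pretend patch0 turns out to depend on two further patches.
    cb "patch0" = ["patch1", "patch2"]
    cb _        = []
```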
> > The question here is: can we call Haskell functions from C? I have no
> > experience here...
> Yes, we can call Haskell from C, but it's a bit scary.  Maybe you'd
> want to write a C callback and have the Haskell code stick stuff into
> a global queue for the C code to access?
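A sketch of that shape (all names are hypothetical, and the global IORef-as-queue is just one option): the C completion callback calls an exported Haskell function, which pushes the finished URL into a global queue that the rest of darcs can drain. Here the C side is simulated from main.

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
-- Sketch of calling Haskell from C via 'foreign export'. Names are
-- hypothetical; hs_transfer_done would be the function a libwww
-- completion callback (written in C) invokes.
module Main where

import Data.IORef (IORef, newIORef, modifyIORef, readIORef)
import Foreign.C.String (CString, peekCString, withCString)
import System.IO.Unsafe (unsafePerformIO)

-- A global queue of completed URLs, shared between the C callback and
-- the Haskell code that consumes them.
{-# NOINLINE completedQueue #-}
completedQueue :: IORef [String]
completedQueue = unsafePerformIO (newIORef [])

-- Entry point for C: record that a transfer finished.
hs_transfer_done :: CString -> IO ()
hs_transfer_done cUrl = do
  url <- peekCString cUrl
  modifyIORef completedQueue (url :)

foreign export ccall hs_transfer_done :: CString -> IO ()

main :: IO ()
main = do
  -- Simulate the C side calling in twice, then drain the queue.
  withCString "patch1" hs_transfer_done
  withCString "patch2" hs_transfer_done
  readIORef completedQueue >>= print
```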
> > > But definitely rewriting copyRemotes is a good starting point.
> > > Ideally we wouldn't remove the libcurl code, but would enable
> > > configure checks to use libwww if we can, otherwise use libcurl if
> > > it's present, and finally fall back on wget, etc. if no libraries are
> > > present.
> >
> > Yes, this should be the first step. I hope to resolve the compilation
> > issues soon. After that, some more changes are needed (like printing
> > progress).
> > I am not familiar with configure stuff, so help is welcome here.
> I can definitely handle the configure stuff, so if you send in a
> working patch without configure support (that just assumes libwww is
> available), along with a note to this effect, I'll see about adding
> the configure bits.  Thanks!
> David
