# [darcs-users] Separating the user manual and code

Trent W. Buck trentbuck at gmail.com
Sun Oct 26 01:50:19 UTC 2008

"Max Battcher" <me at worldmaker.net> writes:

> In a Python project I would do this by pulling parts of the Python files
> (the "documentation strings") and embedding them into the final output at
> documentation build-time

That is essentially what src/preproc.hs does.

I was *very* confused when I looked at the .lhs files, because they use
what look like LaTeX directives, but until I read preproc.hs over and
over I didn't realize that things like \input{foo.lhs} were actually
being handled by preproc and that LaTeX never saw them.  Similarly it
wasn't clear why or how \begin{code} blocks mysteriously vanished
somewhere between the .lhs and the .tex.  There certainly weren't any
LaTeX declarations turning the code environment into a no-op, which is
how I would have expected it to be done.
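As far as I can tell, the transforms amount to something like the
following sketch (this is my reading of preproc.hs, not a faithful
port -- the real tool does more):

```python
import re
from pathlib import Path

# Matches a literate code block, including its delimiters.
CODE_RE = re.compile(r"\\begin\{code\}.*?\\end\{code\}", re.DOTALL)
# Matches \input{...} and captures the file name.
INPUT_RE = re.compile(r"\\input\{([^}]+)\}")

def preprocess(text, read=lambda name: Path(name).read_text()):
    r"""Drop \begin{code} blocks and inline \input'd files recursively,
    so that LaTeX never sees either directive."""
    text = CODE_RE.sub("", text)
    return INPUT_RE.sub(lambda m: preprocess(read(m.group(1)), read), text)
```

That recursion on \input is the "crawl" I mention below; it is what
makes the whole manual build depend on every .lhs file.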

> through a documentation environment (Sphinx) extension, which may have
> no idea of how to "build" the full file but can scrape what it needs.
> With the goal of having a simpler, faster build path for just
> documentation I think it might be useful if the documentation could be
> built with no GHC at all, or, perhaps more likely, just the bare
> minimum of a GHC environment.

Certainly it's a problem that right now, "make pdf" starts by compiling
almost all the .o files.  I don't think it's a big problem in itself to
require GHC to build the documentation.

> (The Sphinx-based approach to preproc/commands might be simple source files
>
>   ===========
>
>
>   .. shellexample::
>
>      darcs init
>      touch a
>      # ...grep for a in darcs whatsnew...

For now, I would like to transition to

1) Most documentation (particularly what is now src/*.tex) manually
translated to doc/manual/*.txt, in reST format.

2) Rather than calling preproc once on darcs.lhs and having it crawl the
text by recognizing \input, call it once for each src/Command/*.lhs
file:

doc/manual/command-%.txt: src/Darcs/Commands/%.lhs src/preproc
        src/preproc $< >$@

Right now I think that preproc is too confusing for technical writers,
but that could be addressed in another way -- by documenting the
transforms preproc performs on its input files.

Actually, regarding (2) I wondered if we could just make the output of
"darcs foo --help" the documentation for that command, and move the
parts that --help doesn't emit into doc/manual/.

doc/manual/command-%.txt: src/darcs
        $< $* --help >$@
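In other words, each page would just be captured --help output.  A
rough sketch of what that rule does, in Python (the paths and the
page-naming scheme are my assumptions, not anything darcs ships):

```python
import subprocess
from pathlib import Path

def write_help_page(argv, name, out_dir):
    """Run `<argv> <name> --help` and save the output as a manual page.

    For darcs this would be something like
    write_help_page(["src/darcs"], "init", "doc/manual"), producing
    doc/manual/command-init.txt -- hypothetical paths throughout.
    """
    out = subprocess.run(argv + [name, "--help"],
                         capture_output=True, text=True, check=True).stdout
    path = Path(out_dir) / f"command-{name}.txt"
    path.write_text(out)
    return path
```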

>> That said, I'm not dead-set on keeping the current literate
>> framework.  A tantalizing alternative would be to move to having
>> literate tests rather than literate code.

The Python community (from which reST originates) has a utility called
"doctest", which can look through reST source files (or docstrings in
Python source) for examples, differentiate between input and output
within the example, then send that input into the REPL, and report if
the output matches the expected output.

I think we can make this handle sh examples with minimal effort.

http://en.wikipedia.org/wiki/Doctest
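For instance, here is the stdlib machinery doing its thing on an
embedded Python example (a shell-example checker for darcs would need
its own runner; this only shows the parse/run/compare cycle):

```python
import doctest

# A reST-ish snippet with an embedded interactive example, exactly as
# doctest would find it in a documentation file.
SNIPPET = """
Some examples that double as tests:

>>> 2 + 2
4
>>> sorted("cab")
['a', 'b', 'c']
"""

# Parse the snippet, run the examples, and compare actual vs expected.
parser = doctest.DocTestParser()
test = parser.get_doctest(SNIPPET, {}, "snippet", "<snippet>", 0)
runner = doctest.DocTestRunner()
runner.run(test)
failed, attempted = runner.summarize()
```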

> This is an interesting idea: more examples/use-cases directly in the
> documentation that double as regression/smoke tests.  I would even argue
> that good examples might even be interesting to some users and so maybe the
> idea might not be to remove them entirely from the final output, but perhaps
> to "hide" it (at least in HTML output), place them in an "Example" box and
> then through a tiny bit of JavaScript allow it so that an interested user
> could expand the example and maybe even try it...

I'm against tweaking the upstream build tools (except where we
absolutely have to), because then someone has to document and maintain
those tweaks.  It also means that the source document no longer behaves
"as advertised" -- you can't just put someone familiar with LaTeX in
front of the file anymore, because \input{} and \begin{code} have
different semantics.

>> I'm curious as to the advantages of reST/sphinx over say markdown.

Markdown has a few elements that are sufficient for blog comments, but
they aren't really enough for writing a user manual.  For example, in
the implementations of markdown I've used, the only way to get a table
is to write raw HTML.  By comparison, reST accepts both simple over-
and-underline tables and the grid tables generated by M-x table-insert,
including column and row spanning, and arbitrary document elements
(even other tables) in the cells.
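For the curious, the two table styles look like this (contents invented
for illustration):

```rst
=======  ==================
Flag     Effect
=======  ==================
-a       answer yes to all
--dry    don't record
=======  ==================

+---------+--------------------+
| Flag    | Effect             |
+=========+====================+
| -a      | answer yes to all  |
+---------+--------------------+
```

The grid style is what M-x table-insert produces, and it is the one
that supports spanning and nested elements.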

As a consequence of that limited syntax, PDF output generated from
markdown will also look terrible.

reST also has a --strict option, so that syntax boo-boos result in a
compile-time error rather than the resulting documentation merely being
wrong.

Internally docutils (the canonical reST implementation) separates
parsing, munging and writing, so it is easy to add new input or output
formats.  It can also emit the internal parse tree as XML, meaning
that it can be munged using something like XSLT if (like me) you'd
rather not work in Python.

>> markdown is more widely-used, and is supported by many
>> implementations

Right; it's so primitive that even PHP hackers can implement it in a few
hundred lines.

>> while I wasn't able to find anywhere a hint as to how one would
>> compile a reST file--probably this is because they are lacking good
>> technical writers for their manual--except by using pandoc (which is
>> the tool that caused me to hear about the markdown format).

My impression is that pandoc is made by markdown users for markdown
users, and what reST support is currently implemented is an
afterthought.  I haven't looked at pandoc for about six months, though,
so it may have improved since then WRT reST.

> Markdown is wider used, but reST was designed to be more amenable to
> extension and was also designed to be more useful to technical
> documentation (markup for option lists, for example).  As far as I
> know the only full parser of reST is the "docutils" parser written in
> Python

That's at http://docutils.sf.net/.  I think I mentioned that in one of
my initial posts.