Improved documentation.

Removed the Cookbook link for now, since it's empty.
Avery Pennarun 2018-11-20 09:50:37 -05:00
commit 1b616ddcbb
6 changed files with 185 additions and 114 deletions


@@ -1,49 +1,35 @@
# redo: a recursive, general-purpose build system
`redo` is a competitor to the long-lived, but sadly imperfect, `make`
program. Unlike other such competitors, redo captures the essential
simplicity and flexibility of make, while avoiding its flaws. It manages to
do this while being simultaneously simpler than make, more flexible than
make, and more powerful than make, and without sacrificing performance - a
rare combination of features.
The original design for redo comes from Daniel J. Bernstein (creator of
qmail and djbdns, among many other useful things). He posted some
terse notes on his web site at one point (there is no date) with the
unassuming title, "[Rebuilding target files when source files have
changed](http://cr.yp.to/redo.html)." Those notes are enough information to
understand how the system is supposed to work; unfortunately there's no code
to go with it. I wrote this implementation of redo from scratch, based on
that design.
After I found out about djb redo, I searched the Internet for any sign that
other people had discovered what I had: a hidden, unimplemented gem of
brilliant code design. I found only one interesting link: Alan Grosskurth,
whose [Master's thesis at the University of Waterloo](http://grosskurth.ca/papers/mmath-thesis.pdf)
was about top-down software rebuilding, that is, djb redo. He wrote his
own (admittedly slow) implementation in about 250 lines of shell script,
which gives an idea for how straightforward the system is.
My implementation of redo is called `redo` for the same reason that there
are 75 different versions of `make` that are all called `make`. It's somehow
easier that way.
I also provide an extremely minimal pure-POSIX-sh implementation, called
`do`, in the `minimal/` directory of this repository.
(Want to discuss redo? See the bottom of this file for
information about our mailing list.)
@@ -51,18 +37,19 @@ information about our mailing list.)
# What's so special about redo?
The theory behind redo sounds too good to be true: it can do everything
`make` can do, but the implementation is vastly simpler, the syntax is
cleaner, and you have even more flexibility without resorting to ugly hacks.
Also, you get all the speed of non-recursive `make` (only check dependencies
once per run) combined with all the cleanliness of recursive `make` (you
don't have code from one module stomping on code from another module).
The easiest way to show it is to jump into an example. Here's one for
compiling a C++ program.
Create a file called default.o.do:
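(The body of the example script didn't change in this commit, so the diff view elides it here. For context, a `default.o.do` in this style might look like the sketch below; the file names, the use of `g++`, and the `-MD` dependency flags are illustrative assumptions, not necessarily the exact script from the README.)

```sh
# default.o.do: tells redo how to build any .o file from a matching .cc file.
# redo runs this script with three arguments:
#   $1 = the target name (e.g. hello.o)
#   $2 = the target name without its extension (e.g. hello)
#   $3 = a temporary file; redo atomically renames it to $1 on success
redo-ifchange "$2.cc"
g++ -MD -MF "$2.d" -c -o "$3" "$2.cc"
# g++ -MD wrote a make-style line like "hello.o: hello.cc hello.h" into $2.d;
# strip the "target:" prefix and declare every listed file as a dependency.
read DEPS <"$2.d"
redo-ifchange ${DEPS#*:}
```

Running `redo hello.o` would then compile `hello.cc` and record both the source file and every included header in redo's dependency database.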
@@ -128,34 +115,43 @@ run the *current script* over again."
Dependencies are tracked in a persistent `.redo` database so that redo can
check them later. If a file needs to be rebuilt, it re-executes the
`whatever.do` script and regenerates the dependencies. If a file doesn't
need to be rebuilt, redo figures that out just using its persistent
`.redo` database, without re-running the script. And it can do that check
just once right at the start of your project build, which is really fast.
Best of all, as you can see in `default.o.do`, you can declare a dependency
*after* building the program. In C, you get your best dependency
information by trying to actually build, since that's how you find out which
headers you need. redo is based on this simple insight: you don't
actually care what the dependencies are *before* you build the target. If
the target doesn't exist, you obviously need to build it.
Once you're building it anyway, the build script itself can calculate the
dependency information however it wants; unlike in `make`, you don't need a
special dependency syntax at all. You can even declare some of your
dependencies after building, which makes C-style autodependencies much
simpler.
redo therefore is a unique combination of imperative and declarative
programming. The initial build is almost entirely imperative (running a
series of scripts). As part of that, the scripts declare dependencies a few
at a time, and redo assembles those into a larger data structure. Then, in
the future, it uses that pre-declared data structure to decide what work
needs to be redone.
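That imperative-then-declarative flow can be sketched with a hypothetical top-level build script (the target names here are assumptions for illustration):

```sh
# all.do: by convention, the top-level target that plain "redo" builds.
# Each redo-ifchange call does two jobs at once: it imperatively builds
# the named targets if they're out of date, and it records them as
# dependencies of "all" in redo's database.
redo-ifchange hello.o util.o

# On a later run, redo consults that recorded dependency graph first,
# and re-executes this script only if something it depends on changed.
```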
(GNU make supports putting some of your dependencies in include files, and
auto-reloading those include files if they change. But this is very
confusing - the program flow through a Makefile is hard to trace already,
and even harder when it restarts from the beginning because an include file
changes at runtime. With redo, you can just read each build script from top
to bottom. A `redo-ifchange` call is like calling a function, which you can
also read from top to bottom.)
# What projects use redo?
Some larger proprietary projects are using it, but unfortunately they can't
easily be linked from this document. Here are a few open source examples:
* [Liberation Circuit](https://github.com/linleyh/liberation-circuit) is a
straightforward example of a C++ binary (a game) compiled with redo.
@@ -177,11 +173,12 @@ Here are a few open source examples:
[`t/111-example/`](t/111-example) subdir of the redo project itself.
If you switch your program's build process to use redo, please let us know and
we can link to it here for some free publicity.
(Please don't use the integration testing code in the redo project's `t/`
directory as serious examples of how to use redo. Many of the tests are
doing things in intentionally psychotic ways in order to stress redo's code
and find bugs.)
# How does this redo compare to other redo implementations?
@@ -190,11 +187,13 @@ djb never released his version, so other people have implemented their own
variants based on his [published specification](http://cr.yp.to/redo.html).
This version, sometimes called apenwarr/redo, is probably the most advanced
one, including support for parallel builds,
[resilient timestamps](https://apenwarr.ca/log/20181113) and checksums,
[build log linearization](https://apenwarr.ca/log/20181106), and
helpful debugging features. It's currently written in python for easier
experimentation, but the plan is to eventually migrate it to plain C. (Some
people like to call this version "python-redo", but I don't like that name.
We shouldn't have to rename it when we later transliterate the code to C.)
Here are some other redo variants (thanks to Nils Dagsson Moskopp for many
of these links):
@@ -241,4 +240,4 @@ redo semantics, and/or have few or no automated tests.
At the time of this writing, none of them except apenwarr/redo (ie. this
project) support parallel builds (`redo -j`). For large projects,
parallel builds are usually considered essential.