Previously, we'd try to put the stdout temp file in the same dir as the
target, if that dir existed; otherwise we'd walk up the directory tree
looking for a usable place. But this went wrong if the directory we
chose got *deleted* during the run of the .do file.
Instead, we switch to an entirely new design: we use mkstemp() to
create and open a temp file in the standard temp file location
(probably /tmp), then immediately delete it, so the .do file can't
cause any unexpected behaviour. After the .do file exits, we use our
still-open fd to the stdout file to read the content back out.
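The open-then-unlink trick can be sketched in plain sh (using mktemp(1)
in place of mkstemp(); the fd numbers and the echo standing in for the
.do file are illustrative, not redo's actual code):

```shell
tmp=$(mktemp)            # temp file in the standard location (usually /tmp)
exec 3>"$tmp"            # write fd: will become the .do file's stdout
exec 4<"$tmp"            # read fd: opened before the unlink, at offset 0
rm -f "$tmp"             # delete the name; both open fds keep the inode alive
echo "pretend .do output" >&3   # stand-in for running the .do file
exec 3>&-                # the .do file has exited; close the write side
cat <&4                  # read the captured stdout back via the open fd
```

Once the name is unlinked, nothing the .do script does to the
filesystem can touch the capture file.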
In the old implementation, we also put the $3 in the "adjusted"
location that depended on whether the target dir already existed, just
for consistency. But that was never necessary: we didn't create the $3
file, and if the .do script wants to write to $3, it should create the
target dir first anyway. So change it to *always* use a $3 temp
filename in the target dir, which is much simpler and so has fewer edge
cases.
Add t/202-del/deltest4 with some tests for all these edge cases.
Reported-by: Jeff Stearns <jeff.stearns@gmail.com>
Basically all just missing quotes around shell strings that use $PWD.
Since redo uses relative paths, most paths inside a project only cause
trouble when project-internal directories or filenames have spaces in
them.
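A quick illustration of the class of bug being fixed (a made-up path,
not one from the project): an unquoted $PWD word-splits as soon as the
path contains a space.

```shell
PWD_DEMO="/tmp/redo test"      # hypothetical working dir with a space
set -- $PWD_DEMO               # unquoted: splits into two words
echo "unquoted: $# args"       # -> unquoted: 2 args
set -- "$PWD_DEMO"             # quoted: stays one word
echo "quoted: $# args"         # -> quoted: 1 args
```

Quoting every expansion makes the scripts behave the same whether or
not the user's checkout path happens to contain whitespace.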
Reported-by: Jeff Stearns <jeff.stearns@gmail.com>
Adding redo-whichdo wasn't so hard, except that it exposed a bunch of
mistakes in the way minimal/do was choosing .do files. Most of the
wrong names it tried didn't matter in practice (since they're very
unlikely to exist), except that they produced the wrong redo-whichdo
output. Debugging that took so much effort that I added unit tests for
some functions to make it easier to find problems in the future.
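For reference, the candidate list within a single directory can be
sketched like this (a simplified illustration of the documented search
order, not the actual _find_dofiles code, which also walks up through
parent directories):

```shell
# Print the .do files tried for a target basename, most specific first:
# e.g. chicken.a.b.c -> chicken.a.b.c.do, default.a.b.c.do,
#      default.b.c.do, default.c.do, default.do
candidates() {
    name=$1
    echo "$name.do"              # exact match first
    ext=${name#*.}               # strip up to the first dot
    [ "$ext" = "$name" ] && ext= # no dot at all: no default.*.do forms
    while [ -n "$ext" ]; do
        echo "default.$ext.do"
        next=${ext#*.}           # drop one more leading extension part
        [ "$next" = "$ext" ] && break
        ext=$next
    done
    echo "default.do"            # least specific, always last
}
candidates chicken.a.b.c
```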
Note: this change apparently makes minimal/do about 50% slower than
before, because the _find_dofiles algorithm got less optimized (but
more readable). It's still not very slow, and anyway, since it's
minimal/do, I figured readable/correct code probably outweighed a bit
of speed. (Although if anyone can come up with an algorithm that's
clear *and* works better, I won't turn it down :) I feel like I must
be doing it the hard way...)