6
votes

I have several commands chained with logical AND (&&), and in the middle is a perl command that edits a file in place.

echo before && \
perl -pe 's/foo/bar/g' -i qux && \
echo after

If the file does not exist, perl dies with "Can't open qux: No such file or directory." But its exit code is 0, which is a problem because I don't want the chain to continue. In other words, 'after' shouldn't print if the file doesn't exist. Why is that happening and how can I stop it?

2
you could just echo before && test -f qux && ... - dsm
@dsm yeah that seems like a good workaround, thanks - newguy
The interpreter (Perl) is returning to the shell, not the (one-line) program that it executed. There is no reason for it to return anything other than success (0). If you were to invoke a script instead of a one-liner you could have it exit 1. - zdim
Another point -- the loop over the input is set up as part of -p, so the absence of a file is an uncaught exception and you have no control over what happens. You could drop that, like perl -e '-f "qux" || exit 1; ...' and then it will work as expected. But then of course you'd have to set up while(<>) { ... } yourself and things become unwieldy. - zdim
@zdim I am curious: is there a reason I can't do $SIG{__DIE__} = sub { exit 1 }? I just tried setting up a while(<>) and the die handler isn't called. - newguy

2 Answers

4
votes

You might use perl -pe ... < file instead of perl -pe ... file. In this case the shell itself will complain if the file does not exist, without even invoking perl. But this cannot be used together with the -i switch.

Or, since you want to use the -i switch, you might check that the file is writable before using it. Note that this checks that qux is both a plain file and writable, because being only one of these is not enough (one may need to check that it is readable, too).

[ -f qux ] && [ -w qux ] && perl -pe ... -i qux
2
votes

OK, so...

As mentioned in comments to the question, the reason perl exits with 0 is that perl itself was successful, even if the code it executed was not. In order to get it to exit with a failure, then, we need to find a way to make it die (or explicitly exit with a non-zero status) if the file can't be opened.

But perl -p wraps a loop around your one-liner, so can we do that without having to repeat the test on every line? Yes, we can! perldoc perlrun tells us that BEGIN and END blocks allow us to take control before or after the loop executes.

But we still need a way to tell whether the file was successfully opened. To do that, we need to know the name of the filehandle used by <>, which perldoc perlvar tells us is called ARGV.

But ARGV is only (reliably) magical when it's read from, so we can't just test for whether it's defined because (in my quick testing) it's always defined, even if the file is missing. So we have to read from it. And then the first line is lost. But that's easily fixed by seeking back to the start of the file.

TL;DR and the upshot of all that:

echo before && \
perl -pe 'BEGIN { exit 1 unless <> ; seek ARGV,0,0 } s/foo/bar/g' -i qux && \
echo after

Note that this still isn't 100% perfect: it will also die if the file exists but is empty. Checking defined <> doesn't fix this flaw, and attempting <> ; die unless ARGV causes <> to read from STDIN if the file doesn't exist. If anyone has a way to get it to correctly detect empty files, I'd be very interested to hear about it!