OK, so...
As mentioned in comments to the question, the reason perl exits with a 0 is because perl itself was successful, even if the code executed by perl was not. In order to get it to exit with a failure, then, we need to find a way to make it die (or explicitly exit with a non-zero status) if the file can't be opened.
But perl -p wraps a loop around your one-liner, so can we do that without having to repeat the test on every line? Yes, we can! perldoc perlrun tells us that BEGIN and END blocks allow us to take control before or after the loop executes.
But we still need a way to tell whether the file was successfully opened. To do that, we need to know the name of the filehandle used by <>, which perldoc perlvar tells us is called ARGV.
But ARGV is only (reliably) magical when it's read from, so we can't just test whether it's defined - in my quick testing, it's always defined, even if the file is missing. So we have to read from it. And then the first line is lost. But that's easily fixed by seeking back to the start of the file.
TL;DR and the upshot of all that:
echo before && \
perl -pe 'BEGIN { exit 1 unless <> ; seek ARGV,0,0 } s/foo/bar/g' -i qux && \
echo after
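For instance (file names here are illustrative): with a file containing foo, the substitution runs and the && chain continues; point the same command at a missing file and the BEGIN block exits 1, cutting the chain short.

```shell
tmp=$(mktemp)
printf 'foo\n' > "$tmp"
# Success path: the file opens, the first line is read and rewound in
# BEGIN, and the in-place edit proceeds normally.
perl -pe 'BEGIN { exit 1 unless <> ; seek ARGV,0,0 } s/foo/bar/g' -i "$tmp" && echo after
cat "$tmp"        # now contains: bar
rm -f "$tmp"
# Failure path: the open fails (perl warns on stderr), <> returns
# undef, and the BEGIN block exits 1, so "after" is not printed.
perl -pe 'BEGIN { exit 1 unless <> ; seek ARGV,0,0 } s/foo/bar/g' -i "$tmp" 2>/dev/null \
    || echo "skipped"
```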
Note that this still isn't 100% perfect: it will also die if the file exists but is empty. Checking defined <> doesn't fix this flaw, and attempting <> ; die unless ARGV causes <> to read from STDIN if the file doesn't exist. If anyone has a way to get it to correctly detect empty files, I'd be very interested to hear about it!
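The empty-file failure mode is easy to reproduce (again, the temp file is just for illustration):

```shell
tmp=$(mktemp)        # mktemp creates an empty file
# The file opens fine, but <> immediately returns undef at EOF, so
# the BEGIN block can't tell it apart from a missing file and exits 1.
perl -pe 'BEGIN { exit 1 unless <> ; seek ARGV,0,0 } s/foo/bar/g' -i "$tmp"
echo "exit status: $?"    # exit status: 1
rm -f "$tmp"
```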
Comments:

- echo before && test -f qux && ... - dsm
- exit 1. - zdim
- so the absence of a file is an uncaught exception and you have no control over what happens. You could drop that, like perl -e '-f qux || exit 1; ...' and then it will work as expected. But then of course you'd have to set up while (<>) { ... } yourself and things become unwieldy. - zdim
- $SIG{__DIE__} = sub {exit 1}; I just tried setting up a while (<>) and the die signal handler isn't called - newguy