5
votes

I'm putting together a complex pipeline in which I want to include stderr in the program output for recordkeeping purposes, but I also want errors to remain on stderr so I can spot problems.

I found this question that asks how to direct stdout+stderr to a file and still get stderr on the terminal; it's close, but I don't want to redirect stdout to a file yet: the program's output will be consumed by other scripts, so I'd like it to remain on stdout (and likewise for stderr). So, to summarize:

  • Script produces output in fd 1, errors in fd 2.
  • I want the calling program to rearrange things so that output+errors appear in fd 1, errors in fd 2.
  • Also, errors should be interleaved with output (as much as their own buffering allows), not saved and added at the end.

Due-diligence notes: Capturing stderr is easy enough with 2>&1. Saving and viewing stdout is easy enough by piping through tee. I also know how to divert stdout to a file and direct stderr through a pipe: command 2>&1 1>fileA | tee fileB. But how do I duplicate stderr and put stdout back in fd 1?
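For reference, here is a runnable sketch of those due-diligence pieces, using a toy command in place of the real pipeline (`fileA` and `fileB` are arbitrary placeholder names):

```shell
#!/usr/bin/env bash

# Merge stderr into stdout (loses the separate error stream):
{ echo out; echo err >&2; } 2>&1

# Save a copy of stdout while still passing it along:
{ echo out; echo err >&2; } | tee fileB

# Divert stdout to fileA and send stderr down the pipe:
# 2>&1 first points fd 2 at the pipe, then 1>fileA moves fd 1 away,
# so the pipe carries only the errors.
{ echo out; echo err >&2; } 2>&1 1>fileA | tee fileB
```

After the last command, `fileA` contains `out` and `fileB` contains `err`.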


1 Answer

4
votes

As a test command that generates both stdout and stderr, let's use the following:

{ echo out; echo err >&2; }

The following code demonstrates how both stdout and stderr can be sent to the next step in the pipeline while also sending stderr to the terminal:

$ { echo out; echo err >&2; } 2> >(tee /dev/stderr) | cat >f
err
$ cat f
out
err

How it works

  • 2>

    This redirects stderr to the (pseudo) file which follows.

  • >(tee /dev/stderr)

    This is process substitution; it acts as a pseudo-file that receives the redirected stderr. Anything it receives is passed to the tee command, which sends it both to stderr and to stdout (and tee's stdout here is the pipe, so the errors join the output stream).
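To check that the duplication really works, you can capture both resulting streams of the whole construct into files (a sketch; this is bash-specific, and `merged.txt`/`dup.txt` are arbitrary names):

```shell
#!/usr/bin/env bash

# Capture fd 1 (output + duplicated errors) and fd 2 (errors only)
# of the entire construct into separate files for inspection.
{ { echo out; echo err >&2; } 2> >(tee /dev/stderr) | cat; } \
    >merged.txt 2>dup.txt

# merged.txt holds both lines; their relative order may vary because
# the tee process runs asynchronously, so sort before comparing.
sort merged.txt    # err, then out
cat dup.txt        # err
```

Inside the process substitution, `/dev/stderr` resolves to whatever fd 2 currently points at; here the outer `2>dup.txt` makes that the file, so the duplicated errors land in `dup.txt` while they also flow through the pipe into `merged.txt`.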