296
votes

I'm trying to run MULTIPLE commands like this.

docker run image cd /path/to/somewhere && python a.py

But this gives me a "No such file or directory" error because it is interpreted as...

"docker run image cd /path/to/somewhere" && "python a.py"

It seems that some ESCAPE characters like "" or () are needed.

So I also tried

docker run image "cd /path/to/somewhere && python a.py"
docker run image (cd /path/to/somewhere && python a.py)

but these didn't work.

I have searched the Docker Run Reference but have not found any hints about ESCAPE characters.

8
Note that for this particular use, docker run has a -w/--workdir argument: docker run -w /path/to/somewhere image python a.py – pullmyteeth

8 Answers

508
votes

To run multiple commands in Docker, use /bin/bash -c and separate the commands with a semicolon ;

docker run image_name /bin/bash -c "cd /path/to/somewhere; python a.py"

If command2 (python) should be executed only when command1 (cd) returns a zero (no error) exit status, use && instead of ;

docker run image_name /bin/bash -c "cd /path/to/somewhere && python a.py"
33
votes

You can do this a couple of ways:

  1. Use the -w option to change the working directory (see the example after this list):

    -w, --workdir="" Working directory inside the container

    https://docs.docker.com/engine/reference/commandline/run/#set-working-directory--w

  2. Pass the entire argument to /bin/bash:

    docker run image /bin/bash -c "cd /path/to/somewhere; python a.py"
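
For example, option 1 with -w might look like this (the same command as in the comment on the question; the path is illustrative):

docker run -w /path/to/somewhere image python a.py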
    
11
votes

You can also pipe commands inside the Docker container with bash -c "<command1> | <command2>", for example:

docker run img /bin/bash -c "ls -1 | wc -l"

But without invoking the shell inside the container, the pipe is interpreted by your local shell instead, and the container's output is redirected to your local terminal.
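
A sketch of the difference (assuming img provides both ls and wc):

# Pipe runs inside the container; the container's wc counts the lines:
docker run img /bin/bash -c "ls -1 | wc -l"

# Pipe runs in your local shell; the container only runs ls -1,
# and a wc on your local machine counts the lines it prints:
docker run img ls -1 | wc -l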

6
votes

bash -c works well if the commands you are running are relatively simple. However, if you're trying to run a long series of commands full of control characters, it can get complex.

I successfully got around this by piping my commands into the process from the outside, i.e.

cat script.sh | docker run -i <image> /bin/bash
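
Here script.sh is an ordinary shell script; a minimal sketch (paths are illustrative):

# script.sh - read by the container's bash from stdin, so no shebang is needed
set -e
cd /path/to/somewhere
python a.py
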

3
votes

If you want to store the result in a file outside the container, on your local machine, you can do something like this.

RES_FILE=$(readlink -f /tmp/result.txt)
touch "${RES_FILE}"   # the file must exist beforehand, otherwise Docker creates a directory at the mount point

docker run --rm -v "${RES_FILE}":/result.txt img bash -c "cat /etc/passwd | grep root > /result.txt"

The result of your commands will be available in /tmp/result.txt on your local machine.
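
Afterwards you can read it directly on the host:

cat /tmp/result.txt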

2
votes

For anyone else who came here looking to do the same with docker-compose, you just need to prepend bash -c and enclose the multiple commands in quotes, joined together with &&.

So in the OP's example: docker-compose run image bash -c "cd /path/to/somewhere && python a.py"

2
votes

Just to make a proper answer from @Eddy Hernandez's comment, which is correct since Alpine comes with ash, not bash.

The question now refers to Starting a shell in the Docker Alpine container, which implies using sh, ash, /bin/sh, or /bin/ash.

Based on the OP's question:

docker run image sh -c "cd /path/to/somewhere && python a.py"

0
votes

In case it's not obvious, if a.py always needs to run in a particular directory, create a simple wrapper script which does the cd and then runs the script.

In your Dockerfile, replace

CMD ["python", "a.py"]

or whatever with

CMD ["/wrapper"]

and create a script wrapper in your root directory (or wherever it's convenient for you) with contents like

#!/bin/sh
set -e
cd /path/to/somewhere
python a.py
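
A minimal Dockerfile sketch for wiring this in (assuming the wrapper script sits next to the Dockerfile; adjust paths to your layout):

COPY wrapper /wrapper
RUN chmod +x /wrapper
CMD ["/wrapper"]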

In many situations, consider also rewriting a.py so that it doesn't need a wrapper: either have it os.chdir() to where it needs to be, or have it look for its data files in a directory you configure in its environment, or similar.