663
votes

I want to do something like this, where I can run multiple commands in order.

db:
  image: postgres
web:
  build: .
  command: python manage.py migrate
  command: python manage.py runserver 0.0.0.0:8000
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - db

16 Answers

1093
votes

Figured it out, use bash -c.

Example:

command: bash -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"

Same example in multilines:

command: >
    bash -c "python manage.py migrate
    && python manage.py runserver 0.0.0.0:8000"

Or:

command: bash -c "
    python manage.py migrate
    && python manage.py runserver 0.0.0.0:8000
  "
190
votes

I run pre-startup stuff like migrations in a separate ephemeral container, like so (note: the compose file has to be of version '2' type):

version: '2'
services:
  db:
    image: postgres
  web:
    image: app
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    links:
      - db
    depends_on:
      - migration
  migration:
    build: .
    image: app
    command: python manage.py migrate
    volumes:
      - .:/code
    links:
      - db
    depends_on:
      - db

This helps keep things clean and separate. Two things to consider:

  1. You have to ensure the correct startup sequence (using depends_on).

  2. You want to avoid multiple builds, which is achieved by tagging the image the first time round using both build and image; you can then refer to that image in the other containers.

163
votes

I recommend using sh as opposed to bash because it is more readily available on most Unix-based images (Alpine, etc.).

Here is an example docker-compose.yml:

version: '3'

services:
  app:
    build:
      context: .
    command: >
      sh -c "python manage.py wait_for_db &&
             python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"

This will call the following commands in order:

  • python manage.py wait_for_db - wait for the DB to be ready (a custom management command; it is not built into Django)
  • python manage.py migrate - run any migrations
  • python manage.py runserver 0.0.0.0:8000 - start my development server
95
votes

This works for me:

version: '3.1'
services:
  db:
    image: postgres
  web:
    build: .
    command:
      - /bin/bash
      - -c
      - |
        python manage.py migrate
        python manage.py runserver 0.0.0.0:8000

    volumes:
      - .:/code
    ports:
      - "8000:8000"
    links:
      - db

docker-compose tries to substitute variables before running the command, so if you want bash to handle variables you'll need to escape the dollar signs by doubling them...

    command:
      - /bin/bash
      - -c
      - |
        var=$$(echo 'foo')
        echo $$var # prints foo

...otherwise you'll get an error:

Invalid interpolation format for "command" option in service "web":

23
votes

You can use entrypoint here. In Docker, the entrypoint is executed before the command, while command is the default command run when the container starts. So most applications carry out their setup procedure in an entrypoint script and, at the end, hand over to command.

Create a shell script, e.g. docker-entrypoint.sh (the name does not matter), with the following contents:

#!/bin/bash
python manage.py migrate
exec "$@"

In the docker-compose.yml file, use it with entrypoint: /docker-entrypoint.sh and register the command as command: python manage.py runserver 0.0.0.0:8000, as sketched below. P.S.: do not forget to copy docker-entrypoint.sh into the image along with your code.
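
A minimal sketch of how that wiring might look in docker-compose.yml (assuming the image copies docker-entrypoint.sh to / and makes it executable; the service name is a placeholder):

version: '3'

services:
  web:
    build: .
    # docker-entrypoint.sh runs the migration, then exec's the command below
    entrypoint: /docker-entrypoint.sh
    command: python manage.py runserver 0.0.0.0:8000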

21
votes

Cleanest?

---
version: "2"
services:
  test:
    image: alpine
    entrypoint: ["/bin/sh","-c"]
    command:
    - |
       echo a
       echo b
       echo c
20
votes

Another idea:

If, as in this case, you build the container yourself, just place a startup script in it and run it with command. Or mount the startup script as a volume, as in the sketch below.
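
A minimal sketch of the volume-mount variant (start.sh is a hypothetical script in the project root containing the commands to run):

services:
  app:
    build: .
    volumes:
      - ./start.sh:/start.sh    # mount the startup script into the container
    command: sh /start.sh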

13
votes

* UPDATE *

I figured the best way to run some commands is to write a custom Dockerfile that does everything I want before the official CMD is run from the image.

docker-compose.yaml:

version: '3'

# Can be used as an alternative to VBox/Vagrant
services:

  mongo:
    container_name: mongo
    image: mongo
    build:
      context: .
      dockerfile: deploy/local/Dockerfile.mongo
    ports:
      - "27017:27017"
    volumes:
      - ../.data/mongodb:/data/db

Dockerfile.mongo:

FROM mongo:3.2.12

RUN mkdir -p /fixtures

COPY ./fixtures /fixtures

RUN (mongod --fork --syslog && \
     mongoimport --db wcm-local --collection clients --file /fixtures/clients.json && \
     mongoimport --db wcm-local --collection configs --file /fixtures/configs.json && \
     mongoimport --db wcm-local --collection content --file /fixtures/content.json && \
     mongoimport --db wcm-local --collection licenses --file /fixtures/licenses.json && \
     mongoimport --db wcm-local --collection lists --file /fixtures/lists.json && \
     mongoimport --db wcm-local --collection properties --file /fixtures/properties.json && \
     mongoimport --db wcm-local --collection videos --file /fixtures/videos.json)

This is probably the cleanest way to do it.

* OLD WAY *

I created a shell script with my commands. In this case I wanted to start mongod and run mongoimport, but calling mongod blocks you from running the rest.

docker-compose.yaml:

version: '3'

services:
  mongo:
    container_name: mongo
    image: mongo:3.2.12
    ports:
      - "27017:27017"
    volumes:
      - ./fixtures:/fixtures
      - ./deploy:/deploy
      - ../.data/mongodb:/data/db
    command: sh /deploy/local/start_mongod.sh

start_mongod.sh:

mongod --fork --syslog && \
mongoimport --db wcm-local --collection clients --file /fixtures/clients.json && \
mongoimport --db wcm-local --collection configs --file /fixtures/configs.json && \
mongoimport --db wcm-local --collection content --file /fixtures/content.json && \
mongoimport --db wcm-local --collection licenses --file /fixtures/licenses.json && \
mongoimport --db wcm-local --collection lists --file /fixtures/lists.json && \
mongoimport --db wcm-local --collection properties --file /fixtures/properties.json && \
mongoimport --db wcm-local --collection videos --file /fixtures/videos.json && \
pkill -f mongod && \
sleep 2 && \
mongod

So this forks mongod, runs mongoimport, kills the forked (detached) mongod, and then starts it up again without detaching. I'm not sure if there is a way to attach to a forked process, but this does work.

NOTE: If you strictly want to load some initial DB data, this is the way to do it (the official mongo image runs any scripts mounted into /docker-entrypoint-initdb.d on first initialization):

mongo_import.sh

#!/bin/bash
# Import from fixtures

# Used in build and docker-compose mongo (different dirs)
DIRECTORY=../deploy/local/mongo_fixtures
if [[ -d "/fixtures" ]]; then
    DIRECTORY=/fixtures
fi
echo ${DIRECTORY}

mongoimport --db wcm-local --collection clients --file ${DIRECTORY}/clients.json && \
mongoimport --db wcm-local --collection configs --file ${DIRECTORY}/configs.json && \
mongoimport --db wcm-local --collection content --file ${DIRECTORY}/content.json && \
mongoimport --db wcm-local --collection licenses --file ${DIRECTORY}/licenses.json && \
mongoimport --db wcm-local --collection lists --file ${DIRECTORY}/lists.json && \
mongoimport --db wcm-local --collection properties --file ${DIRECTORY}/properties.json && \
mongoimport --db wcm-local --collection videos --file ${DIRECTORY}/videos.json

The mongo_fixtures/*.json files were created via the mongoexport command.

docker-compose.yaml

version: '3'

services:
  mongo:
    container_name: mongo
    image: mongo:3.2.12
    ports:
      - "27017:27017"
    volumes:
      - mongo-data:/data/db:cached
      - ./deploy/local/mongo_fixtures:/fixtures
      - ./deploy/local/mongo_import.sh:/docker-entrypoint-initdb.d/mongo_import.sh


volumes:
  mongo-data:
    driver: local
11
votes

To run multiple commands in the docker-compose file, use bash -c:

command: >
    bash -c "python manage.py makemigrations
    && python manage.py migrate
    && python manage.py runserver 0.0.0.0:8000"

Source: https://intellipaat.com/community/19590/docker-run-multiple-commands-using-docker-compose-at-once?show=19597#a19597

5
votes

If you need to run more than one daemon process, there's a suggestion in the Docker documentation to use Supervisord in undetached mode so that all the sub-daemons output to stdout.

From another SO question, I discovered you can redirect the child processes' output to stdout. That way you can see all the output!
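
A minimal sketch of what that could look like in docker-compose (assuming supervisord and a config at /etc/supervisor/supervisord.conf are already baked into the image; the config path is a placeholder):

services:
  app:
    build: .
    # -n / --nodaemon keeps supervisord in the foreground so the container stays up
    command: supervisord -n -c /etc/supervisor/supervisord.conf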

5
votes

Alpine-based images actually seem to have no bash installed, but you can use sh or ash, which are links to /bin/busybox.

Example docker-compose.yml:

version: "3"
services:

  api:
    restart: unless-stopped
    command: ash -c "flask models init && flask run"
4
votes

There are many great answers in this thread already; however, I found that a combination of a few of them seemed to work best, especially for Debian-based users.

services:
  db:
    . . . 
  web:
    . . .
    depends_on:
       - "db"
    command: >      
      bash -c "./wait-for-it.sh db:5432 -- python manage.py makemigrations
      && python manage.py migrate
      && python manage.py runserver 0.0.0.0:8000"

Prerequisites: add wait-for-it.sh to your project directory.

Warning from the docs: "(When using wait-for-it.sh) in production, your database could become unavailable or move hosts at any time ... (This solution is for people that) don’t need this level of resilience."

2
votes

Use a tool such as wait-for-it or dockerize. These are small wrapper scripts which you can include in your application’s image, or write your own wrapper script to perform more application-specific commands, according to https://docs.docker.com/compose/startup-order/. A sketch of the dockerize variant follows.
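
A minimal sketch of the dockerize variant (assuming the dockerize binary has been added to the image and the database service is named db):

services:
  web:
    build: .
    depends_on:
      - db
    # dockerize blocks until db accepts TCP connections, then runs the real command
    command: dockerize -wait tcp://db:5432 -timeout 30s python manage.py runserver 0.0.0.0:8000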

0
votes

I ran into this while trying to get my Jenkins container set up to build Docker containers as the jenkins user.

I needed to touch the docker.sock file in the Dockerfile, as I link it later on in the docker-compose file. Unless I touched it first, it didn't yet exist. This worked for me.

Dockerfile:

USER root
RUN apt-get update && \
apt-get -y install apt-transport-https \
ca-certificates \
curl \
software-properties-common && \
curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
$(lsb_release -cs) \
stable" && \
apt-get update && \
apt-get -y install docker-ce
RUN groupmod -g 492 docker && \
usermod -aG docker jenkins  && \
touch /var/run/docker.sock && \
chmod 777 /var/run/docker.sock

USER jenkins

docker-compose.yml:

version: '3.3'
services:
  jenkins_pipeline:
    build: .
    ports:
      - "8083:8083"
      - "50083:50080"
    volumes:
      - /root/pipeline/jenkins/mount_point_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
0
votes

I was having the same problem: I wanted to run my React app on port 3000 and Storybook on port 6006, both in the same container.

I tried to start both as entrypoint commands from the Dockerfile as well as using the docker-compose command option.

After spending time on this, I decided to separate these services into separate containers, and it worked like a charm. A sketch of that layout is below.
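
A minimal sketch of that two-container layout (assuming both services share the same image and npm scripts named start and storybook exist; the script names are placeholders):

version: '3'

services:
  app:
    build: .
    command: npm start              # React dev server on 3000
    ports:
      - "3000:3000"
  storybook:
    build: .                        # same image, different command
    command: npm run storybook      # Storybook dev server on 6006
    ports:
      - "6006:6006"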

-8
votes

Try using ";" to separate the commands if you are on version two, e.g.:

command: "sleep 20; echo 'a'"