160
votes

I'm trying to wrap my head around Docker from the standpoint of deploying an application which is intended to run on the user's desktop. My application is simply a Flask web application and a Mongo database. Normally I would install both in a VM and forward a host port to the guest web app. I'd like to give Docker a try, but I'm not sure how I'm meant to use more than one program. The documentation says there can only be one ENTRYPOINT, so how can I have Mongo and my Flask application? Or do they need to be in separate containers, in which case how do they talk to each other and how does this make distributing the app easy?

8
Spot on: makes me wonder why Docker is so popular .. (single process ..?) - but let's see what the answers tell us. – WestCoastProjects

8 Answers

126
votes

There can be only one ENTRYPOINT, but that target is usually a script that launches as many programs as are needed. You can additionally use, for example, Supervisord or similar to take care of launching multiple services inside a single container. This is an example of a docker container running mysql, apache and wordpress within a single container.

Say you have one database that is used by a single web application. Then it is probably easier to run both in a single container.

If you have a shared database that is used by more than one application, then it would be better to run the database in its own container and the applications each in their own containers.

There are at least two ways the applications can communicate with each other when they are running in different containers (a sketch of the second option follows the list):

  1. Use exposed IP ports and connect via them.
  2. Recent docker versions support linking.
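
For example, a minimal sketch of option 2 using the --link flag; the image and container names here are assumptions, not from the question:

# run Mongo in its own container
docker run -d --name db mongo

# run the Flask app linked to it; inside this container the database
# is reachable at the hostname "mongo" (e.g. mongodb://mongo:27017)
docker run -d --name web --link db:mongo -p 5000:5000 my-flask-app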
21
votes

I had a similar requirement: running a LAMP stack, MongoDB and my own services.

Docker is OS-level virtualisation, which is why it isolates its container around a running process; hence it requires at least one process running in the FOREGROUND.

So you provide your own startup script as the entrypoint; your startup script thus becomes an extended Docker image script, in which you can stack any number of services, as long as AT LEAST ONE FOREGROUND SERVICE IS STARTED, AND THAT TOO TOWARDS THE END.

So my Dockerfile has the two lines below at the very end:

COPY myStartupScript.sh /usr/local/myscripts/myStartupScript.sh
CMD ["/bin/bash", "/usr/local/myscripts/myStartupScript.sh"]

In my script I start MySQL, MongoDB, Tomcat, etc. At the end I run my Apache as a foreground process.

source /etc/apache2/envvars
/usr/sbin/apache2 -DFOREGROUND

This enables me to start all my services and keep the container alive with the last service started being in the foreground
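
Put together, a myStartupScript.sh of this shape might look roughly like the following (a sketch only, with assumed service names and paths, not the author's actual script):

#!/bin/bash
# start the background services first
service mysql start
mongod --fork --logpath /var/log/mongodb.log
$CATALINA_HOME/bin/startup.sh

# finally run Apache in the foreground so the container stays alive
source /etc/apache2/envvars
exec /usr/sbin/apache2 -DFOREGROUND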

Hope it helps

UPDATE: Since I last answered this question, new things have come up, like Docker Compose, which can help you run each service in its own container yet bind all of them together as dependencies among those services. Try learning more about docker-compose and use it; it is the more elegant way, unless your needs really don't match it.

17
votes

I strongly disagree with some previous solutions that recommended running both services in the same container. It's clearly stated in the documentation that it's not recommended:

It is generally recommended that you separate areas of concern by using one service per container. That service may fork into multiple processes (for example, Apache web server starts multiple worker processes). It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application. You can connect multiple containers using user-defined networks and shared volumes.

There are good use cases for supervisord or similar programs, but running a web application + database is not one of them.

You should definitely use docker-compose to do that and orchestrate multiple containers with different responsibilities.
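
For the Flask + Mongo setup from the question, the compose file could look roughly like this (a minimal sketch; the service names, port and volume are assumptions):

version: "3"
services:
  web:
    build: .            # the Flask app's own Dockerfile
    ports:
      - "5000:5000"
    depends_on:
      - mongo
  mongo:
    image: mongo
    volumes:
      - mongo-data:/data/db

volumes:
  mongo-data:

A single docker-compose up then builds and starts both containers on a shared network, where the Flask app can reach the database at the hostname mongo.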

7
votes

You can run 2 processes in the foreground by using wait. Just make a bash script with the following content, e.g. start.sh:

# runs 2 commands simultaneously:

mongod & # your first application
P1=$!
python script.py & # your second application
P2=$!
wait $P1 $P2

In your Dockerfile, start it with

CMD bash start.sh
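
If you would rather have the container stop as soon as either process exits (instead of only when both have), a variant using bash's wait -n could look like this (a sketch; wait -n needs bash 4.3 or newer):

#!/bin/bash
mongod &             # your first application
python script.py &   # your second application

# wait -n returns as soon as any background job exits,
# so the container stops if either process dies
wait -n
exit $?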
6
votes

They can be in separate containers, and indeed, if the application was also intended to run in a larger environment, they probably would be.

A multi-container system would require some more orchestration to be able to bring up all the required dependencies, though in Docker v0.6.5+ there is a new facility built into Docker itself to help with that - Linking. With a multi-machine solution, it's still something that has to be arranged from outside the Docker environment, however.

With two different containers, the two parts still communicate over TCP/IP, but unless the ports have been locked down specifically (not recommended, as you'd be unable to run more than one copy), you would have to pass the new port that the database has been exposed on to the application, so that it could communicate with Mongo. This, again, is something that Linking can help with.

For a simpler, small installation, where all the dependencies are going in the same container, having both the database and Python runtime started by the program that is initially called as the ENTRYPOINT is also possible. This can be as simple as a shell script, or some other process controller - Supervisord is quite popular, and a number of examples exist in the public Dockerfiles.

4
votes

Docker provides a couple of examples on how to do it. The lightweight option is to:

Put all of your commands in a wrapper script, complete with testing and debugging information. Run the wrapper script as your CMD. This is a very naive example. First, the wrapper script:

#!/bin/bash

# Start the first process
./my_first_process -D
status=$?
if [ $status -ne 0 ]; then
  echo "Failed to start my_first_process: $status"
  exit $status
fi

# Start the second process
./my_second_process -D
status=$?
if [ $status -ne 0 ]; then
  echo "Failed to start my_second_process: $status"
  exit $status
fi

# Naive check runs checks once a minute to see if either of the processes exited.
# This illustrates part of the heavy lifting you need to do if you want to run
# more than one service in a container. The container will exit with an error
# if it detects that either of the processes has exited.
# Otherwise it will loop forever, waking up every 60 seconds

while /bin/true; do
  ps aux |grep my_first_process |grep -q -v grep
  PROCESS_1_STATUS=$?
  ps aux |grep my_second_process |grep -q -v grep
  PROCESS_2_STATUS=$?
  # If the greps above find anything, they will exit with 0 status
  # If they are not both 0, then something is wrong
  if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
    echo "One of the processes has already exited."
    exit -1
  fi
  sleep 60
done

Next, the Dockerfile:

FROM ubuntu:latest
COPY my_first_process my_first_process
COPY my_second_process my_second_process
COPY my_wrapper_script.sh my_wrapper_script.sh
CMD ./my_wrapper_script.sh
3
votes

I agree with the other answers that using two containers is preferable, but if you have your heart set on bundling multiple services in a single container you can use something like supervisord.

In Hipache, for instance, the included Dockerfile runs supervisord, and the supervisord.conf file specifies that both hipache and redis-server be run.
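
Applied to the question's Flask + Mongo setup (rather than Hipache's actual file), the general shape of such a supervisord.conf might be (a sketch; program names and paths are assumptions):

[supervisord]
nodaemon=true

[program:mongod]
command=/usr/bin/mongod

[program:flask]
command=python /app/script.py

nodaemon=true keeps supervisord itself in the foreground, which is what keeps the container alive.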

1
votes

If a dedicated script seems like too much overhead, you can spawn separate processes explicitly with sh -c. For example:

CMD sh -c 'mini_httpd -C /my/config -D &' \
 && ./content_computing_loop