167 votes

I'm trying to run a simple container with a shell (/bin/bash) on a Kubernetes cluster.

I thought there was a way to keep a container running with Docker by using a pseudo-TTY and the detach option (the -td options of the docker run command).

For example,

$ sudo docker run -td ubuntu:latest

Is there an option like this in Kubernetes?

I've tried running a container with the kubectl run-container command, like:

kubectl run-container test_container ubuntu:latest --replicas=1

But the container exits after a few seconds (just as it does when launched with docker run without the options mentioned above), and the ReplicationController launches it again repeatedly.

Is there a way to keep a container running on Kubernetes like the -td options in the docker run command?

Using this image (as the Kubernetes docs suggest) is quite handy: kubectl run curl --image=radial/busyboxplus:curl -i --tty – Matheus Santana

This question has been mentioned in this video: Kubernetes the very hard way at Datadog, under a slide titled "Cargo culting". From Wikipedia: the term cargo cult programmer may apply when an unskilled or novice computer programmer (or one inexperienced with the problem at hand) copies some program code from one place to another with little or no understanding of how it works or whether it is required in its new position. – tgogos

12 Answers

67 votes

A container exits when its main process exits. Doing something like:

docker run -itd debian

to hold the container open is frankly a hack that should only be used for quick tests and examples. If you just want a container for testing for a few minutes, I would do:

docker run -d debian sleep 300

This has the advantage that the container will exit automatically if you forget about it. Alternatively, you could put something like this in a while loop to keep it running forever, or just run an application such as top. All of these are easy to do in Kubernetes.
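
For example, a minimal Kubernetes equivalent of the sleep approach, assuming kubectl is configured for your cluster (the pod name test-sleep is an arbitrary choice):

$ kubectl run test-sleep --image=debian --restart=Never -- sleep 300

The pod runs for five minutes and then shows as Completed, mirroring the docker run -d debian sleep 300 example above.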

The real question is why would you want to do this? Your container should be providing a service, whose process will keep the container running in the background.

206 votes

Containers are meant to run to completion. You need to provide your container with a task that will never finish. Something like this should work:

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
spec:
  containers:
  - name: ubuntu
    image: ubuntu:latest
    # Just spin & wait forever
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
145 votes

You could use this CMD in your Dockerfile:

CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"

This will keep your container alive until it is told to stop. Using trap and wait makes your container react immediately to a stop request; without trap/wait, stopping takes a few seconds.

For busybox-based images (whose sleep is also what Alpine-based images use), sleep does not understand the infinity argument. This workaround gives you the same immediate response to docker stop as the example above:

CMD exec /bin/sh -c "trap : TERM INT; sleep 9999999999d & wait"
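
For context, a minimal Dockerfile sketch using the first variant (the ubuntu base image is an arbitrary choice, not part of the original answer):

FROM ubuntu:latest
# Keep the container alive, but exit promptly on SIGTERM/SIGINT thanks to trap and wait
CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"
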
38 votes
  1. In your Dockerfile, use this command:

    CMD ["sh", "-c", "tail -f /dev/null"]
    
  2. Build your Docker image.

  3. Push it to your registry (or otherwise make sure the image is available to your cluster).
  4. kubectl run debug-container -it --image=<your-image> (see the sketch after this list for re-attaching later)
    
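A re-attach sketch for later sessions, assuming a recent kubectl where kubectl run creates a bare pod named debug-container (the names match the example above):

$ kubectl exec -it debug-container -- sh
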
28 votes

To keep a pod running, its container has to be performing some task; otherwise the main process exits, the container stops, and Kubernetes restarts it according to the restart policy. There are many ways to keep a pod running.

I have faced similar problems when I needed a pod just to run continuously without doing any useful work. The following two approaches worked for me:

  1. Firing up a sleep command while running the container.
  2. Running an infinite loop inside the container.

Although the first option is easier than the second and may be sufficient, it is not the best option: sleep only runs for the number of seconds you assign to it, whereas a container running an infinite loop never exits on its own.

I will describe both approaches below (assuming you are running a busybox container):

1. Sleep Command

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    app: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    ports:
    - containerPort: 80
    command: ["/bin/sh", "-ec", "sleep 1000"]

2. Infinite Loop

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    app: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    ports:
    - containerPort: 80
    command: ["/bin/sh", "-ec", "while :; do echo '.'; sleep 5 ; done"]

Run the following command to create the pod:

kubectl apply -f <pod-yaml-file-name>.yaml
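
To verify that the pod stays up (and, with the second manifest, to watch the loop's output), a quick check; the pod name busybox comes from the manifests above:

$ kubectl get pod busybox
$ kubectl logs -f busybox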

Hope it helps!

20 votes

The simplest possible k8s pod manifest to run a container forever:

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
spec:
  containers:
  - name: ubuntu
    image: ubuntu:latest
    # Just sleep forever
    command: [ "sleep" ]
    args: [ "infinity" ]
13 votes

I was able to get this to work with the command sleep infinity in Kubernetes, which will keep the container open. See this answer for alternatives when that doesn't work.

6 votes

Use this command inside your Dockerfile to keep the container running in your K8s cluster:

  • CMD tail -f /dev/null
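
A minimal Dockerfile sketch around that command (the busybox base image is an arbitrary choice; the exec-form CMD is a small variation so the process receives signals directly):

FROM busybox
# tail -f /dev/null never exits, so the container stays up
CMD ["tail", "-f", "/dev/null"]
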
5 votes

My few cents on the subject. Assuming that kubectl is working, the closest equivalent to the docker command you mentioned in your question would be something like this:

$ kubectl run ubuntu --image=ubuntu --restart=Never --command -- sleep infinity

The above command will create a single Pod in the default namespace and execute the sleep command with the infinity argument. This way you have a process running in the foreground, keeping the container alive.

Afterwards, you can interact with the Pod by running the kubectl exec command.

$ kubectl exec ubuntu -it -- bash

This technique is very useful for creating a Pod resource for ad-hoc debugging.
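
When you are done, the pod can be cleaned up (an extra step, not part of the original answer):

$ kubectl delete pod ubuntu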

3 votes

In my case, a pod with an initContainer failed to initialize. Running docker ps -a and then docker logs exited-container-id-here gave me a log message which kubectl logs podname didn't display. Mystery solved :-)
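
As an aside (not from the original answer), init container logs can also be retrieved through kubectl by naming the container explicitly; podname and init-container-name below are placeholders:

$ kubectl logs podname -c init-container-name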

2 votes

There are many different ways of accomplishing this, but one of the most elegant is:

kubectl run -i --tty --image ubuntu:latest ubuntu-test --restart=Never --rm -- /bin/sh
0 votes

I did a hack by putting it in the background:

[root@localhost ~]# kubectl run hello -it --image ubuntu -- bash &
[2] 128461

Exec into the pod hello:

[root@localhost ~]# kubectl exec -it hello -- whoami
root
[root@localhost ~]# kubectl exec -it hello -- hostname
hello

Getting a shell

[root@localhost ~]# kubectl exec -it hello -- bash
root@hello:/# ls
bin  boot  dev  etc  home  lib  lib32  lib64  libx32  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var