1 vote

I have a Docker container that runs fine when I start it with docker run. I am trying to put that container inside a pod, but I am facing issues. The first run of the pod shows status "Completed", and then the pod keeps restarting with CrashLoopBackOff status. The exit code, however, is 0.

Here is the result of kubectl describe pod:

Name:           messagingclientuiui-6bf95598db-5znfh
Namespace:      mgmt
Node:           db1mgr0deploy01/172.16.32.68
Start Time:     Fri, 03 Aug 2018 09:46:20 -0400
Labels:         app=messagingclientuiui
                pod-template-hash=2695115486
Annotations:    <none>
Status:         Running
IP:             10.244.0.7
Controlled By:  ReplicaSet/messagingclientuiui-6bf95598db
Containers:
  messagingclientuiui:
    Container ID:   docker://a41db3bcb584582e9eacf26b02c7ef26f57c2d43b813f44e4fd1ba63347d3fc3
    Image:          172.32.1.4/messagingclientuiui:667-I20180802-0202
    Image ID:       docker-pullable://172.32.1.4/messagingclientuiui@sha256:89a002448660e25492bed1956cfb8fff447569e80ac8b7f7e0fa4d44e8abee82
    Port:           9087/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 03 Aug 2018 09:50:06 -0400
      Finished:     Fri, 03 Aug 2018 09:50:16 -0400
    Ready:          False
    Restart Count:  5
    Environment Variables from:
      mesg-config  ConfigMap  Optional: false
    Environment:     <none>
    Mounts:
      /docker-mount from messuimount (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-2pthw (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  messuimount:
    Type:          HostPath (bare host directory volume)
    Path:          /mon/monitoring-messui/docker-mount
    HostPathType:
  default-token-2pthw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-2pthw
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age              From                            Message
  ----     ------                 ----             ----                      -------
  Normal   Scheduled              4m               default-scheduler         Successfully assigned messagingclientuiui-6bf95598db-5znfh to db1mgr0deploy01
  Normal   SuccessfulMountVolume  4m               kubelet, db1mgr0deploy01  MountVolume.SetUp succeeded for volume "messuimount"
  Normal   SuccessfulMountVolume  4m               kubelet, db1mgr0deploy01  MountVolume.SetUp succeeded for volume "default-token-2pthw"
  Normal   Pulled                 2m (x5 over 4m)  kubelet, db1mgr0deploy01  Container image "172.32.1.4/messagingclientuiui:667-I20180802-0202" already present on machine
  Normal   Created                2m (x5 over 4m)  kubelet, db1mgr0deploy01  Created container
  Normal   Started                2m (x5 over 4m)  kubelet, db1mgr0deploy01  Started container
  Warning  BackOff                1m (x8 over 4m)  kubelet, db1mgr0deploy01  Back-off restarting failed container

kubectl get pods

NAME                                    READY     STATUS             RESTARTS   AGE
messagingclientuiui-6bf95598db-5znfh   0/1       CrashLoopBackOff   9          23m

I am assuming we need a loop to keep the container running in this case. But I don't understand why it works when run with docker and not when it is inside a pod. Shouldn't it behave the same?

How do we generally debug a CrashLoopBackOff status, apart from running kubectl describe pod and kubectl logs?

can you post the pod logs? – jaxxstorm
There is nothing in the logs. – user1722908
"Shouldn't it behave the same?" Well, I seriously doubt that you have a ConfigMap when running using docker. But, that aside, if there truly are no logs, then a great debugging trick is to change the command: to be command: ["sleep", "3600"] and then exec into that Pod and run the real entrypoint manually, possibly repeatedly, while trying to figure out why it is upset (a sketch of this override follows these comments). – mdaniel
If you try running kubectl logs while it's starting up (again) and you see nothing, kubectl logs --previous can be informative. – David Maze
I added a sleep at the end of the actual command and it is working fine now. Would this be an ok solution? – user1722908
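
As an illustration of mdaniel's trick, this is roughly what the temporary override could look like in the deployment's container spec; the container name and image are taken from the describe output above, the surrounding layout is assumed:

  containers:
  - name: messagingclientuiui
    image: 172.32.1.4/messagingclientuiui:667-I20180802-0202
    # temporary: keep the pod alive so the real entrypoint can be run by hand
    command: ["sleep", "3600"]

and then:

  kubectl exec -it -n mgmt <pod-name> sh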

2 Answers

1 vote

The container terminates with exit code 0 as soon as its main (foreground) process exits; Kubernetes then restarts it, which is what produces the CrashLoopBackOff. To keep the container running, add these to the container spec in the deployment configuration:

  command: ["sh"]
  stdin: true

Replace sh with bash or any other shell that the image may have.

Then you can drop inside the container with exec:

 kubectl exec -it <pod-name> sh

Add the -c <container-name> argument if the pod has more than one container.
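
For orientation, here is a minimal sketch of where those two fields sit in a Deployment; the names and image are reused from the question, the rest of the layout is illustrative:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: messagingclientuiui
    namespace: mgmt
  spec:
    selector:
      matchLabels:
        app: messagingclientuiui
    template:
      metadata:
        labels:
          app: messagingclientuiui
      spec:
        containers:
        - name: messagingclientuiui
          image: 172.32.1.4/messagingclientuiui:667-I20180802-0202
          # an interactive shell stays in the foreground, so the container does not exit
          command: ["sh"]
          stdin: true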

0 votes

Are you sure you ran your software as docker run ... -d ... <command>, that it kept running, and that you use the exact same command in your pod? In some cases, if you compare things that run in docker with -it and no -d, you may find yourself in a pinch: such programs expect a terminal to communicate with the user and exit if a tty is not available (hint: the pod/container can be run with tty: true).

It is very unlikely that you have software that runs in a detached Docker container but does not run in Kubernetes.
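
To make the hint concrete, tty (and stdin) are set per container in the pod template; a small sketch, reusing the names from the question:

  containers:
  - name: messagingclientuiui
    image: 172.32.1.4/messagingclientuiui:667-I20180802-0202
    # mirror docker run -it: allocate a pseudo-TTY and keep stdin open
    tty: true
    stdin: true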