
I can deploy the web app to the AKS cluster manually (pushing the image to Docker Hub, then deploying the YAML files with kubectl), but when I try to automate the same steps through a CI/CD pipeline, the app is not deployed properly to AKS. No error is recorded at the CI/CD level, yet the deployment fails in the cluster. The AKS dashboard shows the error "Back-off restarting failed container", and one of the pod's events says "Container image image_name already present on machine".

The YAML file used for the MVC app and service deployment is below.

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: some_name
spec:
  selector:
    matchLabels:
      app: mvc
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: mvc
    spec:
      containers:
      - name: mvc
        image: my_image_in_DockerHub
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: "1"
            memory: "200Mi"
          requests:
            cpu: "0.1"
            memory: "100Mi"
        env:
          - name: ConnectionStrings__ProductsContext
            valueFrom:
              secretKeyRef:
                name: some_name_for_secret
                key: some_name_for_db_connection
---
kind: Service
apiVersion: v1
metadata:
  name: some_name
spec:
  selector:
    app: mvc
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

The log below is the output of:

kubectl --v=8 logs pod-name

kubectl : I0504 00:39:54.755870   17784 loader.go:359] Config loaded from 
file:  C:\Users\chaitanya/.kube/config
At line:1 char:1
+ kubectl --v=8 logs pod_name
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : NotSpecified: (I0504 
00:39:54....um/.kube/config:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError

I0504 00:39:54.804850   17784 round_trippers.go:416] GET 
https://aks-clustername--dns- 
1f3c4666.hcp.eastus2.azmk8s.io:443/api/v1/namespaces/default/pods/pod-name
I0504 00:39:54.804850   17784 round_trippers.go:423] Request Headers:
I0504 00:39:54.804850   17784 round_trippers.go:426]     Accept: 
application/json, */*
I0504 00:39:54.804850   17784 round_trippers.go:426]     User-Agent: 
kubectl.exe/v1.15.5 (windows/amd64) kubernetes/20c265f
I0504 00:39:54.804850   17784 round_trippers.go:426]     Authorization: 
Bearer 
70327cca50fd26087cad64e1eb48590ebf8c159c3b81c033fa20d806b72960a39d3a3a22
de97c8a90 8c6189918ef79aac4637cb2e0927994cfa0c6f5f514bcc8
I0504 00:39:56.178254   17784 round_trippers.go:441] Response Status: 200 OK 
in 1373 milliseconds
I0504 00:39:56.178254   17784 round_trippers.go:444] Response Headers:
I0504 00:39:56.178254   17784 round_trippers.go:447]     Audit-Id: 59d5b365- 
7cb4-455a-abac-82c3079053e9
I0504 00:39:56.178254   17784 round_trippers.go:447]     Content-Type: 
application/json
I0504 00:39:56.178254   17784 round_trippers.go:447]     Content-Length: 3297
I0504 00:39:56.178254   17784 round_trippers.go:447]     Date: Sun, 03 May 
2020 19:09:56 GMT
I0504 00:39:56.184253   17784 request.go:947] Response Body: 
{"kind":"Pod","apiVersion":"v1","metadata":{"name":"pod- 
name","generateName":"some-name-79c6dff5c5-",
"namespace":"default","selfLink":"/api/v1/namespaces/default/pods/pod- 
name","uid":"f461a90f-0fbc-4f1d-b2fc- 
8c539fe70426","resourceVersion":"153797",
"creationTimestamp":"2020-05-03T18:55:50Z","labels":{"app":"mvc","pod- 
template-hash":"79c6dff5c5"},"ownerReferences": 
[{"apiVersion":"apps/v1","kind":"ReplicaSet",
"name":"some-name-79c6dff5c5","uid":"f5fae8a5-e3d6-44b9-98a2- 
13d197789a9f","controller":true,"blockOwnerDeletion":true}]},"spec": 
{"volumes":[{"name":"default-token-xpdcx",
"secret":{"secretName":"default-token- 
xpdcx","defaultMode":420}}],"containers": 
[{"name":"mvc","image":"my_image_in_DockerHub","ports": 
[{"containerPort":80,"protocol":"TCP"}],
"env":[{"name":"ConnectionStrings__ProductsContext","valueFrom":{"secretKeyRef": 
{"name":"some_name_for_secret","key":"some_name_for_db_connection"}}}],"resources":{"limits":{"cpu": 
[truncated 2273 chars]
I0504 00:39:56.192255   17784 round_trippers.go:416] GET 
https://aks-clustername-dns- 
1f3c4666.hcp.eastus2.azmk8s.io:443/api/v1/namespaces/default/pods/pod- 
name/log
I0504 00:39:56.192255   17784 round_trippers.go:423] Request Headers:
I0504 00:39:56.192255   17784 round_trippers.go:426]     Accept: 
application/json, */*
I0504 00:39:56.192255   17784 round_trippers.go:426]     User-Agent: 
kubectl.exe/v1.15.5 (windows/amd64) kubernetes/20c265f
I0504 00:39:56.192255   17784 round_trippers.go:426]     Authorization: 
Bearer 
70327cca50fd26087cad64e1eb48590ebf8c159c3b81c033fa20d806b72960a39d
3a3a22de97c8a908c6189918ef79aac4637cb2e0927994cfa0c6f5f514bcc8
I0504 00:39:56.482026   17784 round_trippers.go:441] Response Status: 200 OK 
in 289 milliseconds
I0504 00:39:56.483003   17784 round_trippers.go:444] Response Headers:
I0504 00:39:56.483003   17784 round_trippers.go:447]     Content-Type: 
text/plain
I0504 00:39:56.483003   17784 round_trippers.go:447]     Date: Sun, 03 May 
2020 19:09:56 GMT
I0504 00:39:56.483003   17784 round_trippers.go:447]     Audit-Id: 149a7b4d- 
562c-4376-adc8-bf7c7a3cfdf5

kubectl describe pod pod-name

Name:           pod-name
Namespace:      default
Priority:       0
Node:           xxx-agentpool-xxxxxxxxxx-1/10.240.0.4
Start Time:     Mon, 04 May 2020 00:25:50 +0530
Labels:         app=mvc
                pod-template-hash=79c6dff5c5
Annotations:    <none>
Status:         Running
IP:             10.244.1.16
Controlled By:  ReplicaSet/pod-79c6dff5c5
Containers:
  mvc:
    Container ID:   docker://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    Image:          my_image_in_DockerHub
    Image ID:       docker-pullable://xxxxxxxxxxxxxxxx/xxxxxxx@sha256:5779639a3067e0e31f548761ed3f27ab4c8ce64ac5fa9132ff800a2107969fed
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 04 May 2020 00:31:47 +0530
      Finished:     Mon, 04 May 2020 00:31:47 +0530
    Ready:          False
    Restart Count:  6
    Limits:
      cpu:     1
      memory:  200Mi
    Requests:
      cpu:     100m
      memory:  100Mi
    Environment:
      ConnectionStrings__ProductsContext:  <set to the key 'some_name_for_db_connection' in secret 'some_name_for_secret'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xpdcx (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-xpdcx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xpdcx
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                     From                               Message
  ----     ------     ----                    ----                               -------
  Normal   Scheduled  8m52s                   default-scheduler                  Successfully assigned default/pod-name to xxx-agentpool-xxxxxxxx-1
  Normal   Pulling    8m51s                   kubelet, xxx-agentpool-xxxxxxxx-1  Pulling image "my_image_in_DockerHub"
  Normal   Pulled     8m50s                   kubelet, xxx-agentpool-xxxxxxxx-1  Successfully pulled image "my_image_in_DockerHub"
  Normal   Created    7m13s (x5 over 8m50s)   kubelet, xxx-agentpool-xxxxxxxx-1  Created container mvc
  Normal   Started    7m13s (x5 over 8m49s)   kubelet, xxx-agentpool-xxxxxxxx-1  Started container mvc
  Normal   Pulled     7m13s (x4 over 8m48s)   kubelet, xxx-agentpool-xxxxxxxx-1  Container image "my_image_in_DockerHub" already present on machine
  Warning  BackOff    3m50s (x25 over 8m47s)  kubelet, xxx-agentpool-xxxxxxxx-1  Back-off restarting failed container
The file seems all right. Are you sure the image was deployed into the AKS cluster? And can you share the logs of the pods? – Charles Xu
@CharlesXu I have provided the logs in the main question. – Chaitanya
@CharlesXu The error "Back-off restarting failed container" is received while deploying to the AKS cluster. – Chaitanya
It seems the pod works fine and does not report errors. Can you check the pod status and give the full output of kubectl describe pod pod-name? – Charles Xu
@CharlesXu I have appended the result of kubectl describe pod pod-name. – Chaitanya

1 Answer


This may not be the solution, but it should help in some way. As a first step:

To find the names of all your pods, run: kubectl get pods

NAME         READY   STATUS    RESTARTS   AGE
webapp       1/1     Running   15         47h
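Scanning that table by eye works, but the failing pod can also be picked out with a short pipeline. A minimal sketch, assuming the standard `kubectl get pods` table layout (the helper name is my own, not a kubectl feature):

```shell
# Hypothetical helper: print pods that are not fully Ready or not Running.
# Reads `kubectl get pods` table output on stdin; column 2 is READY (x/y),
# column 3 is STATUS. Skips the header row.
filter_unhealthy() {
  awk 'NR > 1 { split($2, r, "/"); if (r[1] != r[2] || $3 != "Running") print }'
}

# Usage against the cluster:
#   kubectl get pods | filter_unhealthy
```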

Find the pod that is failing or has that status, then run

kubectl logs podname

to get the actual error and fix it.
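One caveat: in a restart loop, plain `kubectl logs` may show an empty log from the freshly restarted instance; the standard `--previous` flag shows the output of the last terminated container, which usually contains the real error. In this question, where the last state is Terminated / Completed / Exit Code 0, it is also worth checking whether the image's entrypoint actually starts a long-running server. A small sketch (the exit-code helper is my own, not a kubectl feature):

```shell
# Logs from the previous, terminated instance of the container:
#   kubectl logs pod-name --previous

# Hypothetical helper: pull the last terminated exit code out of
# `kubectl get pod <name> -o json`, read on stdin. Exit code 0 with
# reason "Completed" means the process exited normally and immediately,
# i.e. the entrypoint is not a long-running process.
last_exit_code() {
  sed -n 's/.*"exitCode": *\([0-9][0-9]*\).*/\1/p' | head -n 1
}

# Usage against the cluster:
#   kubectl get pod pod-name -o json | last_exit_code
```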