1 vote

I have an issue deploying some pods on my k8s node. The error is the following:

Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "7da8bce09dd6820a65754073b1b4e52e640291dcb82f1da87ae99570c6964d1b" network for pod "webservices-8675d4667d-7mdf9": networkPlugin cni failed to set up pod "webservices-8675d4667d-7mdf9_default" network: Get https://[10.233.0.1]:443/api/v1/namespaces/default: dial tcp 10.233.0.1:443: i/o timeout
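
For context, 10.233.0.1 is the ClusterIP of the default kubernetes service, i.e. the in-cluster address of the API server, so the CNI plugin on the node cannot reach the API. A quick sanity check from the affected node looks roughly like this (the control-plane address and port are placeholders for your own cluster):

# Run on the affected worker node.
# The service ClusterIP should answer if kube-proxy has programmed the iptables/ipvs rules;
# even a 401/403 response proves connectivity, while an i/o timeout points at kube-proxy or the CNI.
curl -k --connect-timeout 5 https://10.233.0.1:443/version
# The API server should also answer directly on its node address (placeholder values):
curl -k --connect-timeout 5 https://<control-plane-ip>:6443/version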

However, some pods are deployed, for example kubernetes-dashboard:

(screenshot: kubernetes-dashboard pods running)

Update:

NAME                   STATUS   ROLES    AGE     VERSION   LABELS
k8s-master.mariyo.eu   Ready    master   3d15h   v1.16.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master.mariyo.eu,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s-node-1.mariyo.eu   Ready    <none>   3d15h   v1.16.6   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node-1.mariyo.eu,kubernetes.io/os=linux

Deployment for coredns:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: coredns
  namespace: kube-system
  selfLink: /apis/apps/v1/namespaces/kube-system/deployments/coredns
  uid: bd5451ec-2a33-443d-8519-ffcec935ac0c
  resourceVersion: '397508'
  generation: 2
  creationTimestamp: '2020-01-24T16:14:37Z'
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: kube-dns
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: coredns
  annotations:
    deployment.kubernetes.io/revision: '1'
    kubectl.kubernetes.io/last-applied-configuration: >
      {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kube-dns","kubernetes.io/cluster-service":"true","kubernetes.io/name":"coredns"},"name":"coredns","namespace":"kube-system"},"spec":{"selector":{"matchLabels":{"k8s-app":"kube-dns"}},"strategy":{"rollingUpdate":{"maxSurge":"10%","maxUnavailable":0},"type":"RollingUpdate"},"template":{"metadata":{"annotations":{"seccomp.security.alpha.kubernetes.io/pod":"docker/default"},"labels":{"k8s-app":"kube-dns"}},"spec":{"affinity":{"nodeAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"preference":{"matchExpressions":[{"key":"node-role.kubernetes.io/master","operator":"In","values":[""]}]},"weight":100}]},"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchLabels":{"k8s-app":"kube-dns"}},"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"args":["-conf","/etc/coredns/Corefile"],"image":"docker.io/coredns/coredns:1.6.0","imagePullPolicy":"IfNotPresent","livenessProbe":{"failureThreshold":10,"httpGet":{"path":"/health","port":8080,"scheme":"HTTP"},"successThreshold":1,"timeoutSeconds":5},"name":"coredns","ports":[{"containerPort":53,"name":"dns","protocol":"UDP"},{"containerPort":53,"name":"dns-tcp","protocol":"TCP"},{"containerPort":9153,"name":"metrics","protocol":"TCP"}],"readinessProbe":{"failureThreshold":10,"httpGet":{"path":"/ready","port":8181,"scheme":"HTTP"},"successThreshold":1,"timeoutSeconds":5},"resources":{"limits":{"memory":"170Mi"},"requests":{"cpu":"100m","memory":"70Mi"}},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"add":["NET_BIND_SERVICE"],"drop":["all"]},"readOnlyRootFilesystem":true},"volumeMounts":[{"mountPath":"/etc/coredns","name":"config-volume"}]}],"dnsPolicy":"Default","nodeSelector":{"beta.kubernetes.io/os":"linux"},"priorityClassName":"system-cluster-critical","serviceAccountName":"coredns","tolerations":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"},{"key":"CriticalAddonsOnly","operator":"Exists"}],"volumes":[{"configMap":{"items":[{"key":"Corefile","path":"Corefile"}],"name":"coredns"},"name":"config-volume"}]}}}}
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: docker/default
    spec:
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
              - key: Corefile
                path: Corefile
            defaultMode: 420
      containers:
        - name: coredns
          image: 'docker.io/coredns/coredns:1.6.0'
          args:
            - '-conf'
            - /etc/coredns/Corefile
          ports:
            - name: dns
              containerPort: 53
              protocol: UDP
            - name: dns-tcp
              containerPort: 53
              protocol: TCP
            - name: metrics
              containerPort: 9153
              protocol: TCP
          resources:
            limits:
              memory: 170Mi
            requests:
              cpu: 100m
              memory: 70Mi
          volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8181
              scheme: HTTP
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 10
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - all
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: Default
      nodeSelector:
        beta.kubernetes.io/os: linux
      serviceAccountName: coredns
      serviceAccount: coredns
      securityContext: {}
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: In
                    values:
                      - ''
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  k8s-app: kube-dns
              topologyKey: kubernetes.io/hostname
      schedulerName: default-scheduler
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: CriticalAddonsOnly
          operator: Exists
      priorityClassName: system-cluster-critical
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 10%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 2
  replicas: 2
  updatedReplicas: 2
  readyReplicas: 1
  availableReplicas: 1
  unavailableReplicas: 1
  conditions:
    - type: Progressing
      status: 'True'
      lastUpdateTime: '2020-01-24T16:14:42Z'
      lastTransitionTime: '2020-01-24T16:14:37Z'
      reason: NewReplicaSetAvailable
      message: ReplicaSet "coredns-58687784f9" has successfully progressed.
    - type: Available
      status: 'False'
      lastUpdateTime: '2020-01-27T17:42:57Z'
      lastTransitionTime: '2020-01-27T17:42:57Z'
      reason: MinimumReplicasUnavailable
      message: Deployment does not have minimum availability.

Deployment for webservices:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: webservices
  namespace: default
  selfLink: /apis/apps/v1/namespaces/default/deployments/webservices
  uid: da75d3d8-92f4-4d06-86d6-e2fb325806a5
  resourceVersion: '398529'
  generation: 1
  creationTimestamp: '2020-01-27T08:05:16Z'
  labels:
    run: webservices
  annotations:
    deployment.kubernetes.io/revision: '1'
spec:
  replicas: 5
  selector:
    matchLabels:
      run: webservices
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: webservices
    spec:
      containers:
        - name: webservices
          image: nginx
          ports:
            - containerPort: 80
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 1
  replicas: 5
  updatedReplicas: 5
  unavailableReplicas: 5
  conditions:
    - type: Available
      status: 'False'
      lastUpdateTime: '2020-01-27T08:05:16Z'
      lastTransitionTime: '2020-01-27T08:05:16Z'
      reason: MinimumReplicasUnavailable
      message: Deployment does not have minimum availability.
    - type: Progressing
      status: 'False'
      lastUpdateTime: '2020-01-27T17:52:58Z'
      lastTransitionTime: '2020-01-27T17:52:58Z'
      reason: ProgressDeadlineExceeded
      message: ReplicaSet "webservices-8675d4667d" has timed out progressing.
could you please add the output of kubectl get nodes --show-labels and your deployment config – Tummala Dhanvi
@TummalaDhanvi look at updated question, thx – Mariyo

3 Answers

1 vote

Finally, I decided to reinstall the nodes, switching from Debian 10 to Ubuntu 18.04, and everything works as expected.

Thank you for your time

0 votes

The problem is that kube-proxy isn't functioning correctly: I believe 10.233.0.1 is the ClusterIP of the kubernetes API service, and kube-proxy is responsible for setting up the iptables rules that make it reachable. Check the kube-proxy logs and verify that it is healthy and is creating the iptables rules for the kubernetes services.

Take a look here: calico-timeout-pod.
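
Roughly, the checks described above would look like this (the pod name is a placeholder, and the label and proxy mode may differ depending on how the cluster was installed):

# Find and inspect kube-proxy on the affected node:
kubectl -n kube-system get pods -o wide | grep kube-proxy
kubectl -n kube-system logs <kube-proxy-pod-on-that-node> --tail=100
# On the node itself, verify that rules for the API service ClusterIP were created:
sudo iptables-save | grep 10.233.0.1
# If kube-proxy runs in ipvs mode, check the virtual server table instead:
sudo ipvsadm -Ln | grep -A 2 10.233.0.1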

0 votes

I had to set the following on the worker node as well, before joining it, for it to work: sudo sysctl net.bridge.bridge-nf-call-iptables=1
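
For reference, a minimal sketch of making that setting persistent across reboots (this mirrors the usual kubeadm prerequisites; the file name under /etc/sysctl.d is arbitrary):

# Load br_netfilter and persist the bridge/forwarding sysctls
sudo modprobe br_netfilter
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system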