1 vote

I have a situation where zero endpoints are available for one of my services. To test this, I crafted a YAML descriptor that uses a simple Node.js server to set and retrieve the ready/live status of each pod:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-deployment
  labels:
    app: nodejs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nodejs
  template:
    metadata:
      labels:
        app: nodejs
    spec:
      containers:
      - name: nodejs
        image: nodejs_server
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /is_alive
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 3
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /is_ready
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 3
          periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: nodejs-service
  labels:
    app: nodejs
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: nodejs
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nodejs-ingress
spec:
  defaultBackend:
    service:
      name: nodejs-service
      port:
        number: 80

The Node.js server exposes endpoints to set and retrieve its liveness and readiness status.
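For context, a minimal sketch of what such a server might look like. This is an assumed implementation: only the probe paths /is_alive and /is_ready come from the manifest above; the /set_ready toggle route is hypothetical.

// Sketch of the probe server described in the question.
const http = require('http');

let alive = true;
let ready = true;

const server = http.createServer((req, res) => {
  if (req.url === '/is_alive') {
    // Liveness probe target: 200 while alive, 500 otherwise.
    res.statusCode = alive ? 200 : 500;
    res.end(String(alive));
  } else if (req.url === '/is_ready') {
    // Readiness probe target: 200 while ready, 500 otherwise.
    res.statusCode = ready ? 200 : 500;
    res.end(String(ready));
  } else if (req.url === '/set_ready/true' || req.url === '/set_ready/false') {
    // Hypothetical toggle route, not named in the question.
    ready = req.url.endsWith('true');
    res.end(`ready=${ready}`);
  } else {
    res.statusCode = 404;
    res.end();
  }
});

server.listen(8080);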

When the app starts, I can see that 3 replicas are created and all of them are ready. I then manually set their readiness status to false, from outside the ingress. One pod is correctly removed from the endpoints list, so no traffic is routed to it (that's OK, this is the expected behavior). When I set the ready status to false for all pods, the endpoints list is empty (still the expected behavior).
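As a side note (not part of the original question), this can be observed directly by inspecting the Endpoints object backing the service:

kubectl get endpoints nodejs-service

With all pods unready, the ENDPOINTS column shows <none>; as pods pass their readiness probe again, their pod IPs reappear.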

At that point I cannot set ready=true from outside the ingress, as traffic is no longer routed to any pod. Is there a way, for example, to trigger a restart of a pod when readiness is not achieved after n retries or n seconds? Or when the endpoints list is empty?

If you want to restart, why not set replicas: 0, then back to replicas: 3? – Shahriar
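For reference, the commenter's suggestion maps to two kubectl commands (a sketch, assuming the deployment name from the question):

kubectl scale deployment nodejs-deployment --replicas=0
kubectl scale deployment nodejs-deployment --replicas=3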

1 Answer

2 votes

Well, that is perfectly normal and expected behaviour. What you can do, on the side, is forward traffic from localhost to a particular pod with kubectl port-forward. That way you can access the pod directly, bypassing the ingress and service, and set its readiness back to OK. If you want a restart when a pod has not been ready for too long, just use the same endpoint for the liveness probe, but allow it more failed attempts before it triggers.
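A sketch of both suggestions, reusing the names from the question (the pod name is a placeholder, and /set_ready is the hypothetical toggle route from the server sketch above):

# Bypass the Service/Ingress and talk to one pod directly:
kubectl port-forward pod/<nodejs-pod-name> 8080:8080
curl http://localhost:8080/set_ready/true

For the restart-on-prolonged-unreadiness idea, point the liveness probe at the readiness endpoint and raise its failureThreshold, so the kubelet only restarts the container after readiness has been failing for a while:

livenessProbe:
  httpGet:
    path: /is_ready          # same endpoint as the readiness probe
    port: 8080
  initialDelaySeconds: 5
  timeoutSeconds: 3
  periodSeconds: 10
  failureThreshold: 6        # assumption: ~60s of failed checks before restart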