2 votes

I have a Kubernetes cluster hosted on Google Cloud with several deployments + services and an ingress (gce). The services, deployments, and pods are up and running, but the ingress reports an unhealthy status for almost all backend services (ingress.kubernetes.io/backends): it seems to create N+1 backend services (N = number of services/deployments) and only one of them is healthy.

Liveness and readiness probes exist and work fine (the pods report a ready and healthy state with 0 restarts). I also added a root handler that returns 200 OK on the / path. The services are of type NodePort; port and target port are 443. The TLS certificate works and is attached to the ingress.

I expect all backend services to be healthy.
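
The unhealthy backends can be inspected on the ingress itself; for example (the ingress here is named www, as in the YAML below):

kubectl describe ingress www
# or just the backends annotation:
kubectl get ingress www -o jsonpath='{.metadata.annotations.ingress\.kubernetes\.io/backends}'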

Below is an example of the YAML configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dummy-service
  labels:
    app: dummy-app
spec:
  selector:
    matchLabels:
      app: dummy-app
  template:
    metadata:
      labels:
        app: dummy-app
    spec:
      containers:
      - name: dummy-service
        image: xx.gcr.io/dummy-project/dummy-service:latest
        resources:
          limits:
            memory: "128Mi"
            cpu: "100m"
        ports:
        - containerPort: 443
        livenessProbe:
          httpGet:
            path: /health
            port: 443
          initialDelaySeconds: 90
          periodSeconds: 60
        readinessProbe:
          httpGet:
            path: /health
            port: 443
          initialDelaySeconds: 90
          periodSeconds: 60
        env:
        - name: ASPNETCORE_ENVIRONMENT
          value: Production
        - name: ASPNETCORE_URLS
          value: https://*:443;http://*:80
        - name: ASPNETCORE_Kestrel__Certificates__Default__Password
          value: ""
        - name: ASPNETCORE_Kestrel__Certificates__Default__Path
          value: dummy_tls_certificate.pfx

# There are several deployments with the same configuration;
# only the name differs.

---

apiVersion: v1
kind: Service
metadata:
  name: dummy-service
spec:
  selector:
    app: dummy-app
  ports:
  - port: 443
  type: NodePort

# There are several services with the same configuration;
# only the name differs.

---

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: www
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "my-dummy-hostname.com"
    ingress.kubernetes.io/add-base-url: "true"
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "dummy-static-ip-address-name"
spec:
  tls:
  - hosts:
    - my-dummy-hostname.com
    secretName: dummy-tls-secret
  rules:
  - host: my-dummy-hostname.com
    http:
      paths:
      - path: /api/dummy
        backend:
          serviceName: dummy-service
          servicePort: 443
      # Example of other service
      - path: /api/yet_another_dummy
        backend:
          serviceName: yet-another-dummy-service
          servicePort: 443
Can you post the YAML of your ingress, service, and liveness and readiness probes? – Crou
Hello, sure, I edited the question. Thanks. – Lex

1 Answer

2 votes

It seems the problem was related to TLS: the GCE backend service performs its health check against the node directly (a "localhost context"), while the certificate is signed for a specific domain name, so the HTTPS health check fails. I also had to change servicePort to 80 to make it work (the HTTPS connection to the ingress still exists).
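
A minimal sketch of what the adjusted Service and Ingress backend might look like after that change (this assumes the container also listens on plain HTTP port 80, as the ASPNETCORE_URLS value above suggests):

apiVersion: v1
kind: Service
metadata:
  name: dummy-service
spec:
  selector:
    app: dummy-app
  ports:
  - name: http
    port: 80          # plain-HTTP port the GCE health check and backend now use
    targetPort: 80
  type: NodePort

---

# Ingress rule excerpt
      - path: /api/dummy
        backend:
          serviceName: dummy-service
          servicePort: 80

With this setup, TLS still terminates at the GCE load balancer (via the tls section of the ingress), while the load balancer talks to the pods over plain HTTP.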