I am trying to deploy an application via GKE. So far I have created two services and two deployments, one for the front end and one for the back end of the app. I created an Ingress resource using the "gce" controller and mapped the services as shown:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: app
    part: ingress
  name: my-irool-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: my-ip
spec:
  backend:
    serviceName: client-svc
    servicePort: 3000
  rules:
  - http:
      paths:
      - path: /back
        backend:
          serviceName: back-svc
          servicePort: 9000
      - path: /back/*
        backend:
          serviceName: back-svc
          servicePort: 9000

It worked almost fine (not all the routes were mapped correctly, but it worked). Then I modified the code (only the application code), rebuilt the images, and recreated the services, but the Ingress did not take kindly to the changes and

all my services went into an unhealthy state

This is the front-end service:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: app
    part: front
  name: client
  namespace: default
spec:
  type: NodePort
  ports:
  - nodePort: 32585
    port: 3000
    protocol: TCP
  selector:
    app: app
    part: front

When I run a describe, I get nothing besides the fact that my services are unhealthy. And at the moment of creation I keep getting:

Warning GCE 6m loadbalancer-controller googleapi: Error 409: The resource '[project/idproject]/global/healthChecks/k8s-be-32585--17c7......01' already exists, alreadyExists

My questions are:

  • What is wrong with the configuration shown above? Should I map all the services to port 80 (the default Ingress port) so that it works?

  • What are readinessProbe and livenessProbe? Should I add them, or is mapping one of the services to the default backend enough?

1 Answer


For your first question, deleting and re-creating the Ingress may resolve the issue; the 409 error indicates the controller is trying to create a GCE health check that already exists from the previous deployment. As a rough sketch, assuming the Ingress manifest above is saved as ingress.yaml (the filename is an assumption):
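# remove the stuck Ingress so the controller releases the conflicting GCE health check
kubectl delete ingress my-irool-ingress

# re-create it from the manifest; the controller provisions fresh resources
kubectl apply -f ingress.yaml

For your second question, you can review the full steps for configuring liveness and readiness probes in the Kubernetes documentation. As defined there (as an example for a pod):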

livenessProbe: Indicates whether the Container is running. If the liveness probe fails, the kubelet kills the Container, and the Container is subjected to its restart policy. If a Container does not provide a liveness probe, the default state is Success.
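As a minimal sketch, a liveness probe on the back-end container might look like this (the /healthz path is an assumption; your application must actually serve it):

livenessProbe:
  httpGet:
    path: /healthz           # hypothetical health endpoint the app must serve
    port: 9000               # the back-end container port used above
  initialDelaySeconds: 15    # give the app time to start before the first check
  periodSeconds: 10          # check every 10 seconds; repeated failures restart the container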

And readinessProbe: Indicates whether the Container is ready to service requests. If the readiness probe fails, the endpoints controller removes the Pod’s IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a Container does not provide a readiness probe, the default state is Success.
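A readiness probe has the same shape, and it is particularly relevant here: the GCE ingress controller derives its health check from the pod's HTTP readiness probe when one is defined, and otherwise defaults to expecting a 200 response on /, which is a common reason backends show up as unhealthy. A sketch for the front-end container, assuming the app answers 200 on /:

readinessProbe:
  httpGet:
    path: /                  # must return HTTP 200, or the load balancer marks the backend unhealthy
    port: 3000               # the front-end container port from the Service above
  initialDelaySeconds: 5     # wait a few seconds after startup before the first check
  periodSeconds: 10          # re-check every 10 seconds

With probes like these, a failing pod is removed from the Service endpoints (readiness) or restarted (liveness) instead of continuing to receive traffic.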