0
votes

I have a back-end service that I will control using Kubernetes (with a Helm chart). This back-end service connects to a database (MongoDB, as it happens). There is no point in starting up the back-end service until the database is ready to accept a connection (the back-end will handle the missing database by retrying, but that wastes resources and fills the log with distracting error messages).

To do this I believe I could add an init container to my back-end, and have that init container wait (or poll) until the database is ready. It seems this is one of the intended uses of init containers:

Because init containers run to completion before any app containers start, init containers offer a mechanism to block or delay app container startup until a set of preconditions are met.

That is, have the init container of my service perform the same operations as the readiness probe of the database. That in turn means copying and pasting code from the configuration (Helm chart) of the database into the configuration (Helm chart) of my back-end. Not ideal. Is there an easier way? Is there a way I can declare to Kubernetes that my service should not be started until the database is known to be ready?
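For illustration, the kind of init container I have in mind would look something like this; the Service name mongodb and port 27017 are placeholder assumptions about what the database chart exposes, not something copied from its readiness probe:

initContainers:
  - name: wait-for-mongodb
    image: busybox:1.36
    command:
      - sh
      - -c
      - |
        # block start of the app containers until the MongoDB port answers
        until nc -z -w 2 mongodb 27017; do
          echo "waiting for mongodb..."
          sleep 5
        done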

The implementation of a readiness probe is part of the implementation of the container, so there's no easy way to copy this to another container. How can you copy the implementation of a readiness probe from a Helm chart? – weibeld
@weibeld A copy-paste in a text editor. I don't mean some clever template manipulation. – Raedwald
I mean, how is the readiness probe implemented in the Helm chart? You want to copy-paste the readinessProbe.httpGet field? – weibeld
@Raedwald did you try this solution or a different one? – Mark

1 Answer

-1
votes

If I have understood you correctly: from MongoDB's point of view, everything is working as expected using a readiness probe.

As per the documentation:

The kubelet uses readiness probes to know when a Container is ready to start accepting traffic. A Pod is considered ready when all of its Containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.
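For reference, such a readiness probe on the MongoDB container itself might look something like this; the exact command depends on the image and chart (older images ship mongo rather than mongosh), so this is only an assumption about what the DB chart does:

readinessProbe:
  exec:
    command:
      - mongosh
      - --eval
      - "db.adminCommand('ping')"
  initialDelaySeconds: 5
  periodSeconds: 10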

From the back-end's point of view, you can use an init container. The one drawback is that an init container only runs once: your back-end service will start after the init container completes successfully, when the DB Pod is ready to serve traffic, but if the DB fails later, the back-end service will keep filling your logs with error messages, as before.

So what I can propose is to use the solution described here.

In your back-end deployment you can add an additional readiness probe to verify whether your primary deployment is ready to serve traffic, and you can use a sidecar container to handle this process (verifying the connection to the primary DB service and, for example, writing info into a static file at a regular interval). As an example, please take a look at the EKG library with the mongoDBCheck sidecar, or simply exec a command that checks the result of the script running inside your sidecar container (a sketch of such a sidecar follows the probe below):

readinessProbe:
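  # probe fails (find exits non-zero) if alive.txt does not exist;
  # the sidecar script is expected to keep refreshing this file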
  exec:
    command:
      - find
      - alive.txt
      - -mmin
      - '-1'
  initialDelaySeconds: 5
  periodSeconds: 15
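
A minimal sketch of such a sidecar, assuming the database is reachable through a Service named mongodb on port 27017 (container name, image and file path are placeholders): the sidecar refreshes alive.txt only while MongoDB answers, and carries the probe above itself, so the Pod only stays ready while the check keeps passing.

containers:
  - name: backend
    image: my-backend:latest             # your existing back-end container
  - name: mongodb-check                  # sidecar; name and image are placeholders
    image: busybox:1.36
    command:
      - sh
      - -c
      - |
        # refresh alive.txt only while the MongoDB port accepts connections
        while true; do
          if nc -z -w 2 mongodb 27017; then
            touch /tmp/alive.txt
          else
            rm -f /tmp/alive.txt
          fi
          sleep 15
        done
    readinessProbe:
      exec:
        command: ["find", "/tmp/alive.txt", "-mmin", "-1"]
      initialDelaySeconds: 5
      periodSeconds: 15

Since a Pod is only considered ready when all of its containers are ready, the sidecar's probe takes the Pod out of the Service endpoints whenever the database stops answering, instead of only checking it once at startup.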

Hope this helps.