
I've seen conflicting information on what all of these things do. For example:

helm install --dry-run --debug or helm template --debug: We've seen this trick already. It's a great way to have the server render your templates, then return the resulting manifest file.

From: https://helm.sh/docs/chart_template_guide/debugging/#helm

That implies that both template --debug and the dry run send it to the server. Is that true?

I've also seen some places that if you have a schema, that template --validate will also do linting. Is that true? And does the dry run also lint?

Here's my "guess":

  • helm template calls lint even if you don't add --validate
    • And helm template --debug does not send it to the server, but just prints out more debug info
  • --validate does nothing that isn't done with the regular template call
  • helm install --dry-run will send each YAML generated to K8s with the following command(s): kubectl apply --validate=true --dry-run=true -f myyaml.yaml
    • Is this correct? Is that how Helm does a dry run? (And in helm 3 there's no Tiller)

1 Answer


Short answers:

  1. helm template without --validate doesn't contact the Kubernetes server at all. helm template --validate and helm install --dry-run do some additional checks that do involve contacting the API server.
  2. helm lint is a separate command, and neither of the above runs linting.
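Concretely, that split looks like this (a minimal sketch, assuming a chart directory `./mychart` and a release name `myrelease`; the last two commands also need a reachable cluster):

```shell
# Pure client-side render: no Kubernetes API server is contacted.
helm template myrelease ./mychart

# These also render locally, but then consult the cluster's API server
# to check the produced objects; nothing is actually installed.
helm template --validate myrelease ./mychart
helm install --dry-run myrelease ./mychart
```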

Under the hood, helm install and helm template are very similar: both create an action.Install object and configure it.

helm template is always a dry run. If you don't specify --validate, Helm uses a built-in default set of API versions and renders the chart without contacting a Kubernetes server at all. If the chart includes custom resource definitions (CRDs), helm template without --validate won't complain that they're not being processed. The key effect of helm template --debug is that, if the template produces invalid YAML, it still gets printed out anyway.
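For example (same hypothetical `./mychart` path; --api-versions is the knob for extending the default API-version list in an offline render):

```shell
# Offline render using Helm's built-in default .Capabilities.APIVersions:
helm template myrelease ./mychart

# Still offline, but add an API version the chart's templates can probe
# for via .Capabilities.APIVersions.Has:
helm template --api-versions monitoring.coreos.com/v1 myrelease ./mychart

# --debug prints the rendered manifest even when it is not valid YAML:
helm template --debug myrelease ./mychart
```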

helm install --dry-run --debug and helm template --validate end up looking extremely similar in terms of the options they push into the core installer logic. In both cases the chart is rendered without talking to the Kubernetes server. After rendering, both use the Kubernetes client to check that the produced YAML describes object types the cluster supports, and both check whether any of the objects to be created already exist in the cluster.

Helm doesn't actually run kubectl. It instead directly uses the Kubernetes Go client library.
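That said, if you want kubectl to perform comparable server-side checks yourself, a rough equivalent of the pipeline guessed in the question would be (assuming kubectl 1.18+, which supports --dry-run=server):

```shell
# Render client-side with Helm, then let the API server validate the
# objects without persisting anything:
helm template myrelease ./mychart | kubectl apply --dry-run=server -f -
```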

helm lint is a totally separate action. It runs additional checks on the unrendered chart; for example, if there is a file in the templates directory that's not a *.tpl, *.yml, *.yaml, or *.txt file, you'll get a complaint. Neither the install nor the template code path runs it.
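So linting is its own step you have to invoke explicitly; for example, against the same hypothetical chart directory:

```shell
# Static checks on the chart source, independent of any cluster:
helm lint ./mychart
```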