
I am creating a Helm chart that depends on several Helm charts that are not maintained by me, and I would like to make some configuration changes to these subcharts. The configurations are not too complex; I just want to add several environment variables to each of the containers. However, the env fields of the containers are not already templated out in the Helm charts. I would like to avoid forking these charts and maintaining them myself, since this is such a trivial change.

Is there an easy way to provide environment variables to several containers in Kubernetes in a flexible way, either through Helm or another tool?

I am currently looking into using Kustomize to make the last-mile changes after Helm fills out the templates, but I am getting hung up on setting up the Kustomize patches. In my scenario, Helm renders the environment variables into a ConfigMap. I would like to add an envFrom field that reads the ConfigMap and adds the given environment variables to the containers, and I want to add the envFrom to the resource YAML files through Kustomize. The snag I am hitting is that Kustomize patch.yaml files are resource-specific. Below are my patch.yaml and my kustomization.yaml, respectively.
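For context, the ConfigMap template that Helm renders looks roughly like this (the data keys are just placeholders; the name my-env is what the patch below references):

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-env
data:
  # placeholder keys - my real chart defines its own values
  LOG_LEVEL: {{ .Values.logLevel | quote }}
  API_URL: {{ .Values.apiUrl | quote }}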

patch.yaml:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: does-not-matter
spec:
  template:
    spec:
      containers:
        - name: server
          envFrom:
          - configMapRef:
              name: my-env

kustomization.yaml:

resources:
  - all.yaml

patches:
  - path: patch.yaml
    target:
      kind: "StatefulSet"
      name: "*"

To perform the Kustomization, I run:

helm install perceptor ../  --post-renderer ./kustomize

This fills out the Helm templates and passes the rendered manifests to Kustomize for the last-mile patches.
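The ./kustomize post-renderer itself is just a small wrapper script along these lines (a sketch; it assumes the kustomize binary is on the PATH and that kustomization.yaml sits in the working directory):

#!/bin/bash
# Capture the manifests Helm renders on stdin into the file
# listed under `resources:` in kustomization.yaml.
cat > all.yaml
# Apply the patches and emit the result on stdout for Helm to use.
kustomize build .
# Clean up the intermediate file.
rm all.yaml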

In the patch, I have to specify the name of the container ("server") to properly inject my ConfigMap. What I would really like is to provide those environment variables to all containers in a given workload (as defined by the target constraints in kustomization.yaml), regardless of their name. From what I have seen, it almost looks like I will have to write a separate patch for each container (see the index-based sketch below), which is suboptimal. I have just started working with Kubernetes, so it is possible that I am missing something that would easily solve this problem.
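For illustration, an inline JSON 6902 patch can target containers by index instead of by name, but that still means one add operation per container index (a sketch; leaving out target.name so it matches every StatefulSet):

patches:
  - target:
      kind: StatefulSet  # no name field: matches all StatefulSets
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/envFrom
        value:
          - configMapRef:
              name: my-env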

Right after posting this, I came across kubernetes.io/docs/tasks/inject-data-application/podpreset. This may actually be the solution I am looking for. – James Mchugh

This is a nice solution. Post it as an answer so others can make use of it in the future. – Mark Watney

@mWatney I will soon. Before posting it as a solution, I wanted to confirm that it worked as needed. – James Mchugh

@mWatney As it turns out, the PodPreset solution did not work. PodPresets only work with pods, not pod templates. In my case, I needed to add the env vars to pods in a StatefulSet, which uses pod templates. PodPresets do not work for this. – James Mchugh

1 Answer


I understand that you don't want to break the open/closed principle by forking the subchart your umbrella chart depends on, but you still have the option of proposing a change that makes the chart more extensible and flexible. I would suggest submitting a Pull Request or feature request to the Helm chart project in question.

The following snippet won't break current functionality, and it gives users a chance to introduce custom environment variables based on existing ConfigMap(s) in the desired resource's spec.

helm_template.yaml

# helm template
...

env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.name
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      apiVersion: v1
      fieldPath: metadata.namespace
{{- if .Values.envConfigs }}
{{- range $key, $config := $.Values.envConfigs }}
- name: {{ $key }}
  valueFrom:
    configMapKeyRef:
      name: {{ $config | quote }}
      key: {{ $key | quote }}
{{- end }}
{{- end }}

values.yaml

#
# values.yaml
#
envConfigs:
  Q3_CFG_MAP: Q3DM17
  Q3_CFG_TIMEOUT: "30"  # quoted so the ConfigMap name renders as a string, not a number

# if empty use:
# envConfigs: {}
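With those values, the range in the template above expands each envConfigs entry into an env item whose name is the map key and whose value is read from the named ConfigMap; for example, the Q3_CFG_MAP entry renders as:

env:
# ... static POD_NAME / POD_NAMESPACE entries ...
- name: Q3_CFG_MAP
  valueFrom:
    configMapKeyRef:
      name: "Q3DM17"
      key: "Q3_CFG_MAP"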