2
votes

I have Minikube (v1.1.0) running locally with Helm (v2.13.1) initialized, and I connected the local Docker daemon to Minikube by running eval $(minikube docker-env). In the code base of my application I created a chart with helm create chart. I changed the first few lines of ./chart/values.yaml to:

image:
  repository: app-development
  tag: latest
  pullPolicy: Never

I build the image locally and install/upgrade the chart with Helm:

docker build . -t app-development
helm upgrade --install example ./chart

Now, this works perfectly the first time, but if I make changes to the application I would like to run the same two commands to upgrade the image. Is there any way to get this working?

workaround

To get the expected behaviour I can delete the release from Minikube and install it again:

docker build . -t app-development
helm del --purge example
helm install --name example ./chart

2 Answers

5
votes

When you make a change like this, Kubernetes is looking for some change in the Deployment object. If it sees that you want 1 Pod running app-development:latest, and it already has 1 Pod running an image named app-development:latest, then it's in the right state and it doesn't need to do anything (even if the local image that has that tag has changed).

The canonical advice here is to never use the :latest tag with Kubernetes. Every time you build an image, use a distinct tag (a time stamp or the current source control commit ID are easy unique things). With Helm it's easy enough to inject this based on a value you pass in:

image: app-development:{{ .Values.tag | default "latest" }}
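For context, that templated reference lives in the Deployment template. A minimal sketch of the container spec in chart/templates/deployment.yaml, combining the image.repository value from the question with the top-level tag value suggested here (field names follow the standard helm create scaffold; adjust to your chart):

```yaml
# Sketch: container spec inside chart/templates/deployment.yaml.
# `.Values.image.repository` comes from the question's values.yaml;
# `.Values.tag` is the new value injected at upgrade time.
spec:
  containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.tag | default "latest" }}"
    imagePullPolicy: {{ .Values.image.pullPolicy }}
```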

This sort of build sequence would look a little more like

TAG=$(date +%Y%m%d-%H%M%S)
docker build -t "app-development:$TAG" .
helm upgrade --install example ./chart --set "tag=$TAG"
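If you want to sanity-check the tag before wiring it into a build script, the timestamp expansion can be run on its own (a sketch; the format string matches the sequence above and produces an 8-digit date, a dash, and a 6-digit time):

```shell
# Produce a unique timestamp tag, e.g. 20190601-142530
TAG=$(date +%Y%m%d-%H%M%S)
echo "$TAG"
```

The current git commit ID works just as well, as long as every image you build gets a tag that no previous build used.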

If you're actively developing your component you may find it easier to try to separate out "hacking on code" from "deploying into Kubernetes" as much as you can. Some amount of this tends to be inevitable, but Kubernetes really isn't designed to be a live development environment.

0
votes

One way you can solve this problem is by using Minikube and Cloud Code from Google. When you initialize Cloud Code in your project, it creates a skaffold.yaml at the root location. You can keep the Helm chart for the same project in the same code base. Edit this configuration so that it matches the folder location of the Helm chart:

deploy:
  helm:
    releases:
    - name: <chart_name>
      chartPath: <folder path relative to this file>
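Put together, a minimal skaffold.yaml for the setup in the question might look like this (the apiVersion varies with your Skaffold release, and example/chart are the release and folder names used earlier, so treat the exact values as placeholders):

```yaml
# Minimal skaffold.yaml sketch, assuming the image name and chart
# layout from the question; adjust apiVersion to your Skaffold version.
apiVersion: skaffold/v2beta12
kind: Config
build:
  artifacts:
  - image: app-development
deploy:
  helm:
    releases:
    - name: example
      chartPath: chart
```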

Now, when you click on Cloud Code at the bottom of your editor (Visual Studio Code or any supported editor), it should give you the following options: https://i.stack.imgur.com/vXK4U.png Select "Run on Kubernetes" from the list.

The only change you'll have to make in your Helm chart is to read the image URL from the skaffold.yaml using a profile:

profiles:
- name: prod
  deploy:
    helm:
      releases:
      - name: <helm_chart_name>
        chartPath: helm
        skipBuildDependencies: true
        artifactOverrides:
          image: <url_production_image_url>

This will read the image from the configured URL, whereas locally it will read from the Docker daemon. Cloud Code also provides hot update/deployment when you make changes to any file, so there is no need to specify an image tag while testing locally. Once you're happy with the code, update the image with the latest version number, which should trigger a deployment in your integration/dev environment.