21
votes

I'm very new to Kubernetes and using k8s v1.4, Minikube v0.15.0 and the Spotify Maven Docker plugin.
The build process of my project creates a Docker image and pushes it directly into the Docker engine of Minikube.
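
For reference, building straight into Minikube's Docker daemon usually looks roughly like the following sketch (it assumes the Spotify plugin is already configured in pom.xml; the Maven goals shown are the plugin's defaults):

# point the local Docker client at Minikube's Docker daemon,
# so the built image is available inside the cluster without a registry push
eval $(minikube docker-env)

# build the image with the Spotify docker-maven-plugin
mvn clean package docker:build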

The pods are created by the Deployment I've created (backed by a ReplicaSet), and the strategy is set to type: RollingUpdate.

I saw this in the documentation:

Note: a Deployment’s rollout is triggered if and only if the Deployment’s pod template (i.e. .spec.template) is changed.


I'm searching for an easy way/workaround to automate the flow: build triggered > a new Docker image is pushed (without changing the version) > the Deployment updates the pods > the Service exposes the new pods.

If you aren't changing the image at all, then there is no way to ensure that you get the new image in each pod, unless you set ImagePullPolicy: Always and kill each pod and have the deployment recreate it. However, if you are creating a new docker image every time, it would make sense to update the tag as well. - Anirudh Ramanathan
@AnirudhRamanathan As I'm not creating a "new" image every time, just updating the image, I will go with the first approach. So is there a way to kill the old pods automatically? - yuval simhon
ImagePullPolicy: Always is not working with local images, so for now I'm manually deleting the pods with a specific label, and then the ReplicaSet recreates them with the updated image. Wondering if there is any way to do this automatically. - yuval simhon
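
For reference, that manual step could look roughly like this (a sketch; app=my-app is a placeholder label):

# delete every pod carrying the label; the ReplicaSet behind the
# Deployment immediately recreates them with the current image
kubectl delete pods -l app=my-app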

3 Answers

26
votes

When not changing the container image name or tag, you would just scale your application to 0 and back to the original size with something like:

kubectl scale --replicas=0 deployment application
kubectl scale --replicas=1 deployment application

As already mentioned in the comments, imagePullPolicy: Always is then required in your configuration.
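
For completeness, a minimal sketch of where that field lives in the pod template (the apiVersion, names and image are placeholders for your setup):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application
  template:
    metadata:
      labels:
        app: application
    spec:
      containers:
        - name: app-container
          image: my-image:latest     # tag stays the same between builds
          imagePullPolicy: Always    # re-pull the image on every pod start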

When changing the image, I found this to be the most straightforward way to update the image:

kubectl set image deployment/application app-container=$IMAGE

Not changing the image has the downside that you'll have nothing to fall back to in case of problems. Therefore I'd not suggest using this outside of a development environment.


Edit: as a small bonus, keeping the scale in sync before and after could look something like:

replica_spec=$(kubectl get deployment/application -o jsonpath='{.spec.replicas}')
kubectl scale --replicas=0 deployment application
kubectl scale --replicas=$replica_spec deployment application
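
Wrapped into a tiny script, with a wait for the rollout to finish, that could look roughly like this (a sketch; the Deployment name is a placeholder):

#!/usr/bin/env bash
set -euo pipefail

deployment=application   # placeholder

# remember the current size, bounce the Deployment, wait until it's back up
replicas=$(kubectl get deployment/"$deployment" -o jsonpath='{.spec.replicas}')
kubectl scale --replicas=0 deployment/"$deployment"
kubectl scale --replicas="$replicas" deployment/"$deployment"
kubectl rollout status deployment/"$deployment"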

Cheers

6
votes

Use the following if you are on at least version 1.15:

kubectl rollout restart deployment/deployment-name

Read more about it here: kubectl rollout restart
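
In a CI step that re-pushes the same tag, a hedged sketch could be:

# restart every pod of the Deployment and wait until the new ones are ready
kubectl rollout restart deployment/deployment-name
kubectl rollout status deployment/deployment-name --timeout=120s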

0
votes

I'm curious why you're not changing the image version (:

Another option (besides kubectl rollout restart) is to use kubectl patch:

kubectl patch deployment name -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"version\":\"$BUILD_SHA_OR_DATE\"}}}}}"

With this command you have the flexibility to change specific fields in the Deployment spec, like the label selector, pod labels, environment variables, etc.
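
For example, using the current timestamp as the value (my-dep is a placeholder name):

kubectl patch deployment my-dep -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"version\":\"$(date +%s)\"}}}}}"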

(*) Another option, more suitable for debugging but worth mentioning, is to check the revision history of your rollout:

$ kubectl rollout history deployment my-dep
deployment.apps/my-dep
 
REVISION  CHANGE-CAUSE
2         <none>
4         <none>
5         <none>
6         <none>
11        <none>
12        <none>

You can then return to a previous revision by running:

$ kubectl rollout undo deployment my-dep --to-revision=11

And then returning to the new one.

(**) The CHANGE-CAUSE is <none> because the updates were not run with the --record flag; to populate it, run the updates like this, as mentioned here:

kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record

(***) There is a discussion regarding deprecating this flag.