I have 6 replicas of a pod running which I would like to restart/recreate every 5 minutes.
This needs to be a rolling update, so that they are not all terminated at once and there is no downtime. How do I achieve this?
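For what it's worth, my understanding is that the rolling / zero-downtime behaviour would normally come from the Deployment's update strategy, something along these lines (the name ja-engine, the namespace and the image are placeholders for my actual setup):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ja-engine
  namespace: jk-test
spec:
  replicas: 6
  selector:
    matchLabels:
      app: ja-engine
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take an old pod down before its replacement is ready
      maxSurge: 1         # bring up one extra pod at a time
  template:
    metadata:
      labels:
        app: ja-engine
    spec:
      containers:
      - name: ja-engine
        image: app-image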
I tried using a CronJob, but it does not seem to be working:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: scheduled-pods-recreate
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: ja-engine
            image: app-image
            imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure
Although the CronJob was created successfully and scheduled as described below, it seems to have never run:
Name:                          scheduled-pods-recreate
Namespace:                     jk-test
Labels:                        <none>
Annotations:                   <none>
Schedule:                      */5 * * * *
Concurrency Policy:            Forbid
Suspend:                       False
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
  Containers:
   ja-engine:
    Image:        image_url
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Last Schedule Time:            Tue, 19 Feb 2019 10:10:00 +0100
Active Jobs:                   scheduled-pods-recreate-1550567400
Events:
  Type    Reason            Age   From                Message
  ----    ------            ----  ----                -------
  Normal  SuccessfulCreate  23m   cronjob-controller  Created job scheduled-pods-recreate-1550567400
So, first of all, how do I ensure that it is running so that the pods are recreated?
Also, how can I ensure that there is no downtime?
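If it helps, my assumption is that for the rollout to be truly zero-downtime the containers also need a readiness probe, so Kubernetes only counts a new pod as available once it can serve traffic. A rough sketch (the /healthz path and port 8080 are placeholders for whatever the app actually exposes):

      containers:
      - name: ja-engine
        image: app-image
        readinessProbe:
          httpGet:
            path: /healthz        # placeholder health endpoint
            port: 8080            # placeholder port
          initialDelaySeconds: 5
          periodSeconds: 5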
Here is the updated version of the CronJob:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
          restartPolicy: OnFailure
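For reference, a simplified sketch of the deployment being patched (the image and replica count are placeholders); the idea is that rewriting the START_TIME env value in the pod template should trigger a rolling update of its pods:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: runners
  namespace: jp-test
spec:
  replicas: 6
  selector:
    matchLabels:
      app: jp-runner
  template:
    metadata:
      labels:
        app: jp-runner
    spec:
      containers:
      - name: jp-runner
        image: app-image          # placeholder
        env:
        - name: START_TIME        # the CronJob patch overwrites this with the current timestamp
          value: "0"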
The pods are not starting, with the message Back-off restarting failed container and the error given below:
State:          Terminated
  Reason:       Error
  Exit Code:    127