
I'm using a Jenkins setup on GKE, installed via the standard Helm chart. My builds are consistently failing, which I'm trying to troubleshoot, but in addition a new slave pod is created on every build attempt (with a pod name like jenkins-slave-3wsb7). Almost all of them go to a Completed state after the build fails, and then the pod lingers in my GKE dashboard and in the list of pods from kubectl get pods. I currently have 80+ pods showing as a result.

Is this expected behavior? Is there a workaround to clean up old Completed pods?

Thanks.


2 Answers

1 vote

As a workaround to clean up completed pods:

kubectl delete pod NAME --grace-period=0 --force
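
If a large number of Completed pods have already piled up, deleting them one at a time is tedious. As a sketch (assuming the finished pods are in the Succeeded phase, which is what kubectl shows as Completed), a field selector can bulk-delete them in one command:

kubectl delete pods --field-selector=status.phase=Succeeded

Pods from failed builds that ended up in the Failed phase can be cleaned up the same way with status.phase=Failed.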
0 votes

If you are using Kubernetes 1.12 or later, the ttlSecondsAfterFinished field was introduced in the Job spec for exactly this purpose. Note that it's an alpha feature in 1.12.

apiVersion: batch/v1
kind: Job
metadata:
  name: job-with-ttl
spec:
  ttlSecondsAfterFinished: 100  # Job is cleaned up 100s after it finishes
  template:
    spec:
      containers:
      - name: myjob
        image: myimage
        command: ["run_some_batch_job"]
      restartPolicy: Never
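
To try it out, apply the manifest and watch; the Job (and its pods, via cascading deletion) should be removed roughly 100 seconds after it finishes. A minimal usage sketch, assuming the manifest above is saved as job-with-ttl.yaml and the alpha TTLAfterFinished feature gate is enabled on your cluster:

kubectl apply -f job-with-ttl.yaml
kubectl get jobs --watch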