I have a HorizontalPodAutoscaler to scale my pods based on CPU. The minReplicas here is set to 5:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-web
  minReplicas: 5
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
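(For reference, the HPA's current state can be inspected with the commands below; this assumes the HPA lives in the production namespace, the same as the other resources further down:)

kubectl -n production get hpa myapp-web
kubectl -n production describe hpa myapp-web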
I've then added Cron jobs to scale up/down my horizontal pod autoscaler based on time of day:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: production
  name: cron-runner
rules:
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["patch", "get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: cron-runner
  namespace: production
subjects:
- kind: ServiceAccount
  name: sa-cron-runner
  namespace: production
roleRef:
  kind: Role
  name: cron-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-cron-runner
  namespace: production
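(To double-check that the ServiceAccount is actually allowed to patch the HPA, an impersonation check along these lines should work; the --as value follows the standard system:serviceaccount:<namespace>:<name> form:)

kubectl -n production auth can-i patch horizontalpodautoscalers \
  --as=system:serviceaccount:production:sa-cron-runner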
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: django-scale-up-job
  namespace: production
spec:
  schedule: "56 11 * * 1-6"
  successfulJobsHistoryLimit: 0 # Remove after successful completion
  failedJobsHistoryLimit: 1 # Retain failed so that we see it
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
          - name: django-scale-up-job
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl patch hpa myapp-web --patch '{"spec":{"minReplicas":8}}'
          restartPolicy: OnFailure
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: django-scale-down-job
  namespace: production
spec:
  schedule: "30 20 * * 1-6"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 0 # Remove after successful completion
  failedJobsHistoryLimit: 1 # Retain failed so that we see it
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-cron-runner
          containers:
          - name: django-scale-down-job
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl patch hpa myapp-web --patch '{"spec":{"minReplicas":5}}'
          restartPolicy: OnFailure
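(To exercise either CronJob without waiting for its schedule, a one-off Job can be created from it and its logs inspected; the job name scale-up-test below is arbitrary:)

kubectl -n production create job --from=cronjob/django-scale-up-job scale-up-test
kubectl -n production logs job/scale-up-test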
This works really well, except that when I deploy, the minReplicas value from the HorizontalPodAutoscaler spec (in my case, 5) overwrites whatever value the CronJobs have set.
I'm deploying my HPA using kubectl apply -f ~/autoscale.yaml.
Is there a way of handling this situation? Do I need to create some kind of shared logic so that my deployment scripts can work out what the minReplicas value should be? Or is there a simpler way of handling this?
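(To make the problem concrete, this is roughly the sequence: after the morning CronJob fires, the HPA's minReplicas reads 8, but the next deploy resets it to 5:)

kubectl -n production get hpa myapp-web -o jsonpath='{.spec.minReplicas}'   # 8 after the scale-up job
kubectl apply -f ~/autoscale.yaml
kubectl -n production get hpa myapp-web -o jsonpath='{.spec.minReplicas}'   # back to 5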
Comments:

Could you tell more about this HPA scaling of yours? What exactly do you mean by overwriting the minReplicas (as it should, looking at your example)? You could create an HPA that encompasses both situations (high load / low load) in a single HPA, based on the available metrics. – Dawid Kruk

If the issue is the replicas in your Deployment being reset on each new create/apply, you could try updating your Deployment with a JSON merge patch. Is this what you are looking for? – Dawid Kruk

It's the HPA's minReplicas value that's overwriting the one set by the CronJob (as opposed to the replicas in the Deployment). I tried using a JSON merge to deploy the HPA (kubectl patch -f autoscale.yaml --type=merge -p "$(cat autoscale.yaml)") and it didn't work. Any other ideas? – MDalt