2 votes

I'd like to manually force Kubernetes to scale up to a specific number of instances.

THERE IS NO DEPLOYMENT DEFINED - this means no replicas. I have a set of jobs and their related pending pods.

How can this be done?

I do have the cluster autoscaler running if required.

Why:

Because in certain configurations the cluster autoscaler is not sufficient to scale up a new node when a pod is pending. (See: Kubernetes autoscaler - 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added))
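For context, the symptom shows up in the pending pod's events via kubectl describe pod. The event message below is quoted from the linked question; exact wording may vary by autoscaler version, and the pod name is a placeholder:

kubectl describe pod <pending-pod-name>
...
NotTriggerScaleUp  cluster-autoscaler  pod didn't trigger scale-up (it wouldn't fit if a new node is added)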

As I understand it, Kubernetes yaml files define the desired cluster state/count. – djb
How did you create your cluster? If the node pool is in an autoscaling group, increase the size of that group to the desired number of nodes. – erk
@Chris Could you please tell us where your cluster is running? – Wytrzymały Wiktor
AWS. I was looking for a way to test the scaling functionality directly via the cluster autoscaler, even though it would work indirectly by setting the instance count on the infrastructure side. – Chris Stryczynski

3 Answers

0 votes

You can patch the deployment. The following patches the deployment MY_DEPLOYMENT to 2 replicas:

kubectl patch deployment MY_DEPLOYMENT -p '{"spec":{"replicas": 2}}'
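If a Deployment exists (which the question notes is not the case here), kubectl scale is an equivalent one-liner that sets the same field:

kubectl scale deployment MY_DEPLOYMENT --replicas=2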

0 votes

You can always define replicas in your manifest; using the replicas field you can scale up manually. The following .yaml file is an example (shown here on a ReplicaSet, but the same field applies to a Deployment):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
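
One possible way to apply the manifest above and resize it later, assuming it is saved as frontend.yaml (hypothetical filename):

kubectl apply -f frontend.yaml
kubectl scale replicaset frontend --replicas=5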
0 votes

If I understand you correctly, this actually can be done.

Create a simple and lightweight deployment with pod anti-affinity and scale it up manually when you need more nodes. That way a new replica will not be scheduled on a node that already runs one of these pods; it will remain in the Pending state, which makes the cluster autoscaler create another node for it.
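A minimal sketch of such a deployment, assuming a hypothetical name node-holder and the standard pause container image; the required pod anti-affinity on kubernetes.io/hostname allows at most one replica per node, so each extra replica stays Pending until a new node appears:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-holder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-holder
  template:
    metadata:
      labels:
        app: node-holder
    spec:
      affinity:
        podAntiAffinity:
          # never co-locate two of these pods on the same node
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: node-holder
            topologyKey: kubernetes.io/hostname
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1

Scaling it up then forces the autoscaler to add nodes, e.g.:

kubectl scale deployment node-holder --replicas=3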

Please let me know if that helped.