I have an EKS cluster with the Cluster Autoscaler (CA) enabled. Let's say there are three nodes, node-1, node-2 and node-3, and each node can hold a maximum of 10 pods. When the 31st pod is created, the CA launches a fourth node and the pod is scheduled there. Now suppose 4 pods on node-2 are no longer required and are terminated. The requirement is that when a new pod is launched, the scheduler should place it on the fourth node (the one launched by the CA) and not on node-2. Going further, if pods are removed from nodes, new pods should land on the already existing nodes rather than on a fresh node brought up by the CA. I tried updating the EKS default scheduler's config file to use a scheduler plugin, but was unable to do so (on EKS the control plane is managed, so the default kube-scheduler's configuration can't be edited).
I think we can create a second scheduler, but I don't properly understand the process. Any workaround or suggestions would help a lot.
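From the Kubernetes docs on running multiple schedulers, my rough understanding is that the second scheduler runs as an ordinary Deployment in kube-system with the config file mounted from a ConfigMap, and that workloads opt into it through spec.schedulerName. This is only a sketch of what I have in mind, not something I have working; the image tag, ConfigMap name and service account are placeholders, and the service account would still need RBAC equivalent to what system:kube-scheduler has:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-scheduler-new
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      component: kube-scheduler-new
  template:
    metadata:
      labels:
        component: kube-scheduler-new
    spec:
      serviceAccountName: kube-scheduler-new         # placeholder; needs scheduler RBAC
      containers:
        - name: kube-scheduler
          image: k8s.gcr.io/kube-scheduler:v1.21.0   # placeholder; should match the cluster version
          command:
            - kube-scheduler
            - --config=/etc/kubernetes/custom.config
          volumeMounts:
            - name: config
              mountPath: /etc/kubernetes
      volumes:
        - name: config
          configMap:
            name: kube-scheduler-new-config          # placeholder; holds the custom.config shown below

I assume that when it runs in-cluster I would drop the kubeconfig line from custom.config and let it authenticate with the pod's service account. A pod would then select the second scheduler like this (pod name and image are just examples):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  schedulerName: kube-scheduler-new    # must match the profile name in custom.config
  containers:
    - name: app
      image: nginx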
For now I tried running the binary directly. This is the command: "kube-scheduler --config custom.config", and it gets stuck with this message: "attempting to acquire leader lease kube-system/kube-scheduler...".
This is my custom.config file:

apiVersion: kubescheduler.config.k8s.io/v1beta1
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf
kind: KubeSchedulerConfiguration
percentageOfNodesToScore: 100
profiles:
  - schedulerName: kube-scheduler-new
    plugins:
      score:
        disabled:
          - name: '*'
        enabled:
          - name: NodeResourcesMostAllocated
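My guess is that the "attempting to acquire leader lease kube-system/kube-scheduler" message means my second scheduler is competing for the same lease as the default scheduler, since I haven't given it its own lease name. I assume (not confirmed) the fix is either to disable leader election for a single replica or to point the second scheduler at its own lease, along these lines in custom.config:

leaderElection:
  leaderElect: false        # single replica, so no election needed
  # or, keeping leaderElect: true, use a lease distinct from the default scheduler's:
  # resourceName: kube-scheduler-new
  # resourceNamespace: kube-system

Can someone confirm whether this is the right approach on EKS, or suggest a better one?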