I have a Kubernetes cluster running in AWS. I used kops to set up and start the cluster.
I defined a minimum and maximum number of nodes in the nodes instance group:
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-07-03T15:37:59Z
  labels:
    kops.k8s.io/cluster: k8s.tst.test-cluster.com
  name: nodes
spec:
  image: kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-05-02
  machineType: t2.large
  maxSize: 7
  minSize: 5
  role: Node
  subnets:
  - eu-central-1b
Currently the cluster has 5 nodes running. After some deployments, pods/containers cannot start because no node has enough free resources.
So I assumed that when there is a resource shortage, Kubernetes would automatically scale the cluster and start more nodes, since the maximum number of nodes is 7.
Am I missing any configuration?
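For context, my understanding is that minSize/maxSize only set the bounds of the AWS Auto Scaling group that kops creates; scaling up in response to pending pods is done by the cluster-autoscaler addon, which has to point at that group. This is a rough sketch of the container args I would expect in its deployment (the image tag and ASG name are my guesses, not taken from my cluster):

containers:
- name: cluster-autoscaler
  # image tag is a placeholder for whichever version the addon installs
  image: gcr.io/google_containers/cluster-autoscaler:v0.5.4
  command:
  - ./cluster-autoscaler
  - --v=4
  - --cloud-provider=aws
  - --skip-nodes-with-local-storage=false
  # format is MIN:MAX:ASG-NAME; the ASG name below is what kops usually
  # generates for the "nodes" instance group, not verified against my account
  - --nodes=5:7:nodes.k8s.tst.test-cluster.com
  env:
  - name: AWS_REGION
    value: eu-central-1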
UPDATE
As @kichik mentioned, the autoscaler addon is already installed. Nevertheless, it still doesn't work. Kube-dns also keeps restarting because of resource problems.
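One thing I still want to rule out, based on my reading of the cluster-autoscaler docs: it only reacts to pods that stay Pending because their resource requests cannot be satisfied, so pods without requests would not trigger a scale-up. A minimal sketch of what I mean by declaring requests on my own deployments (the names and values are just illustrative):

containers:
- name: my-app            # hypothetical container name
  image: my-app:latest    # hypothetical image
  resources:
    requests:
      cpu: 500m
      memory: 512Mi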