I am trying to set up a Kubernetes cluster on AWS using kops. I configured 3 master nodes and 6 worker nodes, but after launching the cluster only two master nodes are up.
I am using the gossip-based .k8s.local
DNS instead of a purchased DNS domain. Below is the command I am using to create the cluster.
kops create cluster \
--cloud=aws \
--name=kops-cassandra-cluster-01.k8s.local \
--zones=ap-south-1a,ap-south-1b,ap-south-1c \
--master-size="t2.small" \
--master-count=3 \
--master-zones=ap-south-1a,ap-south-1b,ap-south-1c \
--node-size="t2.small" \
--ssh-public-key="kops-cassandra-cluster-01.pub" \
--state=s3://kops-cassandra-cluster-01 \
--node-count=6
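Since kops creates one master instance group per zone listed in --master-zones, a quick sanity check (a sketch, reusing the same cluster name and state bucket as above) is to list the instance groups and confirm three master groups plus the nodes group exist:
kops get instancegroups \
--name=kops-cassandra-cluster-01.k8s.local \
--state=s3://kops-cassandra-cluster-01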
After executing kops update cluster --name=kops-cassandra-cluster-01.k8s.local --state=s3://kops-cassandra-cluster-01 --yes,
only two master nodes are available instead of three.
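One check that may help (a sketch, assuming the same name and state store) is kops validate, which compares the expected instance group sizes against the nodes that actually registered:
kops validate cluster \
--name=kops-cassandra-cluster-01.k8s.local \
--state=s3://kops-cassandra-cluster-01
I would expect it to flag the master instance group whose machine never joined as not having enough ready nodes.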
kubectl get nodes
shows:
NAME                                           STATUS   ROLES    AGE   VERSION
ip-172-20-44-37.ap-south-1.compute.internal    Ready    master   18m   v1.12.8
ip-172-20-52-78.ap-south-1.compute.internal    Ready    node     18m   v1.12.8
ip-172-20-60-234.ap-south-1.compute.internal   Ready    node     18m   v1.12.8
ip-172-20-61-141.ap-south-1.compute.internal   Ready    node     18m   v1.12.8
ip-172-20-66-215.ap-south-1.compute.internal   Ready    node     18m   v1.12.8
ip-172-20-69-124.ap-south-1.compute.internal   Ready    master   18m   v1.12.8
ip-172-20-85-58.ap-south-1.compute.internal    Ready    node     18m   v1.12.8
ip-172-20-90-119.ap-south-1.compute.internal   Ready    node     18m   v1.12.8
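To see where the third master is stuck on the EC2 side, the Auto Scaling group that kops created for that zone can be inspected (a sketch; the group name below follows the usual kops naming pattern, and ap-south-1c is only an example since I do not yet know which zone's master is missing):
aws autoscaling describe-scaling-activities \
--auto-scaling-group-name master-ap-south-1c.masters.kops-cassandra-cluster-01.k8s.local \
--region ap-south-1 \
--max-items 5
This should show whether the instance failed to launch at all (for example due to capacity or limit issues in that zone) or launched and then failed to join the cluster.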
I am new to Kubernetes. Am I missing something?