I have a single-node Kubernetes cluster running. Everything is working fine, but when I run "kubectl get cs" (kubectl get componentstatus) it shows two instances of etcd, even though I am running only a single etcd instance.
[root@master01 vagrant]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
[root@master01 vagrant]# etcdctl member list
19ef3eced66f4ae3: name=master01 peerURLs=http://10.0.0.10:2380 clientURLs=http://0.0.0.0:2379 isLeader=true
[root@master01 vagrant]# etcdctl cluster-health
member 19ef3eced66f4ae3 is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
Etcd is running as a Docker container. In the /etc/systemd/system/etcd.service file, only a single etcd member is configured (http://10.0.0.10:2380):
/usr/local/bin/etcd \
--name master01 \
--data-dir /etcd-data \
--listen-client-urls http://0.0.0.0:2379 \
--advertise-client-urls http://0.0.0.0:2379 \
--listen-peer-urls http://0.0.0.0:2380 \
--initial-advertise-peer-urls http://10.0.0.10:2380 \
--initial-cluster master01=http://10.0.0.10:2380 \
--initial-cluster-token my-token \
--initial-cluster-state new \
Also, in the API server manifest /etc/kubernetes/manifests/api-srv.yaml, the --etcd-servers flag is set:
- --etcd-servers=http://10.0.0.10:2379,
[root@master01 manifests]# netstat -ntulp |grep etcd
tcp6 0 0 :::2379 :::* LISTEN 31109/etcd
tcp6 0 0 :::2380 :::* LISTEN 31109/etcd
Does anyone know why "kubectl get cs" shows both etcd-0 and etcd-1? Any help is appreciated.
Vit: kubectl get all --all-namespaces -o wide

Jyothish Kumar S: The problem was the trailing comma in --etcd-servers=http://10.0.0.10:2379,. As far as I understand, the comma was adding a second etcd server record with an empty host and the default port 2379, so the apiserver was trying to check health on http://:::2379. After removing the comma, "kubectl get cs" shows a single etcd instance.
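For reference, the trailing-comma behaviour is easy to reproduce with a quick sketch. This is plain Python purely for illustration; the real kube-apiserver is written in Go and does its own flag parsing, but the effect of splitting a comma-separated value is the same:

```python
# The --etcd-servers flag value is treated as a comma-separated list.
# A trailing comma produces an empty second entry after splitting,
# which the apiserver then interprets as a second etcd server with
# default host/port.
flag_value = "http://10.0.0.10:2379,"  # note the trailing comma

servers = flag_value.split(",")
print(servers)  # ['http://10.0.0.10:2379', ''] — the '' is the phantom etcd-1
```

Dropping the comma leaves a single entry, which matches the single etcd member shown by "etcdctl member list".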