0 votes

I created a single-node cluster on an Ubuntu 18.04 node on EC2 using kubeadm init. However, I am unable to join the cluster (unable to connect to the API server) from another node.

Note: this is an EC2 instance.

Kubectl is working fine on the master itself.

I used the following command where MASTER_PRIVATE_IP is 172.31.25.111.

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=${MASTER_PRIVATE_IP} --apiserver-cert-extra-sans=${MASTER_PUBLIC_IP}

When I try to join a second node on the same private network to the cluster with kubeadm join, it just times out. I can SSH to the master with no problem, but when running netstat on the master I see that it only seems to be listening on port 6443 on IPv6 addresses. Why? I provided the private IPv4 address as the advertise address (and the kubeconfig has that private IPv4 address, of course).
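
A basic connectivity check from the worker (assuming nc and curl are available there) would look something like this:

# From the worker node: raw TCP check against the master's advertise address
nc -vz 172.31.25.111 6443

# Probe the API server's health endpoint directly (-k skips cert verification)
curl -k https://172.31.25.111:6443/healthz

Here is the kube-apiserver process command line and the netstat output on the master: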

kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.31.25.111 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
netstat -tulpn | grep -E ":(22|6443)" | grep LISTEN
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
tcp6       0      0 :::22                   :::*                    LISTEN      -
tcp6       0      0 :::6443                 :::*                    LISTEN      -
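
The tcp6-only line is actually expected: the apiserver binds the wildcard address, and on Linux an IPv6 wildcard socket also accepts IPv4 connections as long as net.ipv6.bindv6only is 0 (the default). A quick sanity check on the master, assuming curl is installed:

# 0 means IPv6 wildcard sockets also accept IPv4 connections (the Linux default)
sysctl net.ipv6.bindv6only

# The API server still answers on its private IPv4 address despite the tcp6 listing
curl -k https://172.31.25.111:6443/version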

Any ideas?

It is probably because of missing/wrong security groups on the master instance. - Markownikow
Thanks a lot, that was indeed the case. I'd forgotten to add this when adding the 6443 port to my config; one terraform apply later and kubectl is working fine from the worker nodes. - mjbright
I'm still a bit surprised not to see anything, i.e. kube-apiserver, listed as listening on port 6443 for IPv4 in the netstat output above, only IPv6. Anyway, it works now... thanks. - mjbright

2 Answers

1 vote

Add a security group rule to the master EC2 instance, for instance like this:

Port range: 0 - 6555, or just 6443

Source IP: 172.31.0.0/16
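
If you manage the security group with Terraform (as mentioned in the comments) or from the command line rather than the console, an equivalent rule with the AWS CLI would be something along these lines (the security group ID below is a placeholder for the one attached to the master):

# Allow the VPC's private range to reach the Kubernetes API port on the master
# sg-0123456789abcdef0 is a placeholder for the master's security group ID
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 6443 \
  --cidr 172.31.0.0/16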

0 votes

To add to @Markownikow's answer: in this instance, your issue appears to be limited to a missing security group rule. I was able to reproduce this on my end, and adding the above rule allowed the worker to reach the master on port 6443. For the sake of completeness, below are a few additional checks you could do:

  • Make sure both master and worker are in the same VPC and subnet (one way to check this is shown below the list).
  • If master and worker are in different subnets, make sure appropriate rules are in place to allow both instances to talk to each other.
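
One way to verify the first point, assuming the AWS CLI is configured and the instance IDs below are placeholders for your master and worker:

# Compare VPC, subnet and private IP of the master and worker instances
aws ec2 describe-instances \
  --instance-ids i-0aaaaaaaaaaaaaaaa i-0bbbbbbbbbbbbbbbb \
  --query 'Reservations[].Instances[].[InstanceId,VpcId,SubnetId,PrivateIpAddress]' \
  --output table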