
Question

Flanneld on a Kubernetes worker node has the configuration file /etc/sysconfig/flanneld, which points to etcd on the worker node's localhost, whereas it should point to the master node's etcd URL.

Does this mean the Pod network has not been configured appropriately, or does Flannel with Kubernetes use different configuration files? If so, which configuration does flanneld use?

Also, if there are good references/resources on how Kubernetes interacts with CNI, kindly suggest them.

On the worker node, the configuration points to itself instead of the master IP.

$ cat /etc/sysconfig/flanneld  

# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
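
To see which configuration the running daemon actually picked up (assuming flanneld runs as a systemd service from the CentOS package on this node), the unit file and the process arguments can be checked:

# show the unit file and the EnvironmentFile it sources
$ systemctl cat flanneld

# show the flags the daemon was actually started with
$ ps -o args= -C flanneld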

The worker nodes joined successfully:

$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    25m       v1.8.5
node01    Ready     <none>    25m       v1.8.5
node02    Ready     <none>    25m       v1.8.5

The flannel.1 interface on the worker node is configured with the same CIDR as the master, although the configuration does not point to the master where Flannel was configured.

$ ip addr
...
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:0d:f8:34 brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.12/24 brd 192.168.99.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::6839:cd66:9352:2280/64 scope link 
       valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:2c:56:b8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:2c:56:b8 brd ff:ff:ff:ff:ff:ff
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 02:42:67:48:ae:ef brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
7: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN 
    link/ether 56:20:a1:4d:f0:d2 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::5420:a1ff:fe4d:f0d2/64 scope link 
       valid_lft forever preferred_lft forever
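
Besides ip addr, the subnet actually leased to this node can be cross-checked against the environment file flanneld writes at startup (the path is the common default; the values shown are what the ip addr output above suggests and may differ on other setups):

$ cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true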

The step executed on the worker (besides sudo yum install kubelet kubeadm flanneld) was kubeadm join, which appears to have succeeded (although with a few error messages).

changed: [192.168.99.12] => {...
  "[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.",
  "[preflight] Running pre-flight checks",
  "[preflight] Starting the kubelet service",
  "[discovery] Trying to connect to API Server \"192.168.99.10:6443\"",
  "[discovery] Created cluster-info discovery client, requesting info from \"https://192.168.99.10:6443\"",
  "[discovery] Failed to connect to API Server \"192.168.99.10:6443\": there is no JWS signed token in the cluster-info ConfigMap. This token id \"7ae0ed\" is invalid for this cluster, can't connect",
  "[discovery] Trying to connect to API Server \"192.168.99.10:6443\"",
  "[discovery] Created cluster-info discovery client, requesting info from \"https://192.168.99.10:6443\"",
  "[discovery] Failed to connect to API Server \"192.168.99.10:6443\": there is no JWS signed token in the cluster-info ConfigMap. This token id \"7ae0ed\" is invalid for this cluster, can't connect",
  "[discovery] Trying to connect to API Server \"192.168.99.10:6443\"",
  "[discovery] Created cluster-info discovery client, requesting info from \"https://192.168.99.10:6443\"",
  "[discovery] Requesting info from \"https://192.168.99.10:6443\" again to validate TLS against the pinned public key",
  "[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server \"192.168.99.10:6443\"",
  "[discovery] Successfully established connection with API Server \"192.168.99.10:6443\"",
  "[bootstrap] Detected server version: v1.8.5",
  "[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)",
  "",
  "Node join complete:",
  "* Certificate signing request sent to master and response",
  "  received.",
  "* Kubelet informed of new secure connection details.",
  "",
  "Run 'kubectl get nodes' on the master to see this machine join."

Background

Kubernetes 1.8.5 was installed by following Using kubeadm to Create a Cluster, on CentOS 7 VMs in VirtualBox.

1 Answer

The Flannel configuration is stored in etcd. The FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379" parameter defines where etcd is located, and FLANNEL_ETCD_PREFIX="/atomic.io/network" defines where the data is stored inside etcd.
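
Those variables live in /etc/sysconfig/flanneld, which is only an EnvironmentFile for the systemd unit; the service wrapper expands them into flanneld's command-line flags, roughly like this (a sketch, the exact wrapper script differs between distributions):

/usr/bin/flanneld \
  -etcd-endpoints="${FLANNEL_ETCD_ENDPOINTS}" \
  -etcd-prefix="${FLANNEL_ETCD_PREFIX}" \
  $FLANNEL_OPTIONS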

So, to see the Flannel configuration for your particular case, we need to read that information from etcd:

etcdctl --endpoint=127.0.0.1:2379 get /atomic.io/network/config
{"Network":"10.2.0.0/16","Backend":{"Type":"vxlan"}}

We can also see how many subnets are in use in the cluster:

etcdctl --endpoint=127.0.0.1:2379 ls /atomic.io/network/subnets
/atomic.io/network/subnets/10.2.41.0-24
/atomic.io/network/subnets/10.2.86.0-24

And check the information for any of them:

etcdctl --endpoint=127.0.0.1:2379 get /atomic.io/network/subnets/10.2.41.0-24
{"PublicIP":"10.0.0.16","BackendType":"vxlan","BackendData":{"VtepMAC":"45:e7:76:d5:1c:49"}}