
I am trying to set up an HAProxy-based multi-master setup for Kubernetes, as described in [1]. My network configuration is:

  • haproxy = 192.168.1.213
  • master0|1|2 = 192.168.1.210|211|212
  • worker0|1|2 = 192.168.1.220|221|222 (not relevant at this point)
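
A quick sketch to verify from any node that these names resolve and the hosts are reachable (hostnames are assumptions matching the list above; adjust to your actual DNS names):

# Rough check of DNS resolution and reachability from any node
for host in haproxy master0 master1 master2 worker0 worker1 worker2; do
    getent hosts "$host" > /dev/null && ping -c1 -W1 "$host" > /dev/null \
        && echo "$host: ok" || echo "$host: NOT reachable"
done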

All hosts are able to connect to each other (DNS resolves for each node). Each node is running Ubuntu 18.04.3 LTS. Docker is installed as

  • docker.io/bionic-updates,bionic-security,now 18.09.7-0ubuntu1~18.04.4 amd64 [installed]

Kubernetes packages currently installed are

  • kubeadm/kubernetes-xenial,now 1.16.3-00 amd64 [installed]
  • kubectl/kubernetes-xenial,now 1.16.3-00 amd64 [installed]
  • kubelet/kubernetes-xenial,now 1.16.3-00 amd64 [installed,automatic]
  • kubernetes-cni/kubernetes-xenial,now 0.7.5-00 amd64 [installed,automatic]

using the additional repository described in [2] (I'm aware that my VMs run bionic, but the "newest" repository available is still xenial).
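
For completeness, this is roughly how the kubernetes-xenial repository from [2] gets added and how the versions can be pinned (a sketch; adjust to your setup):

# Add the repository from [2] and install the pinned versions (sketch)
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" \
    | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet=1.16.3-00 kubeadm=1.16.3-00 kubectl=1.16.3-00
sudo apt-mark hold kubelet kubeadm kubectl   # keep the packages from auto-upgrading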

My HAProxy is installed as haproxy/bionic,now 2.0.9-1ppa1~bionic amd64 [installed] from the repository in [3]. Its configuration is:

global
    log /dev/log        local0
    log /dev/log        local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    retries 2
    timeout connect 3000ms
    timeout client  5000ms
    timeout server  5000ms

frontend kubernetes
    bind        *:6443
    option      tcplog
    mode        tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode    tcp
    balance roundrobin
    option  tcp-check
    server  master0 192.168.1.210:6443 check fall 3 rise 2
    server  master1 192.168.1.211:6443 check fall 3 rise 2
    server  master2 192.168.1.212:6443 check fall 3 rise 2
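
To rule out the load balancer itself, a couple of quick checks (a sketch; the curl can only succeed once the first apiserver is actually up):

# Is HAProxy listening on the frontend port?
ss -tlnp | grep 6443
# Once the first control plane is up, the apiserver should answer through HAProxy:
curl -k https://haproxy.my.lan:6443/version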

While trying to set up my first control plane, running kubeadm init --control-plane-endpoint "haproxy.my.lan:6443" --upload-certs -v=6 as described in [4] results in this error:

Error writing Crisocket information for the control-plane node

The full log is in [5]. I'm pretty lost as to whether there's a mistake in my HAProxy configuration or whether there might be some fault in Docker or Kubernetes itself.
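
In case it helps, these are the kinds of checks that can be run on the failing master around that step (a sketch, not taken from the log in [5]):

# kubeadm writes the CRI socket annotation via the kubelet/node object, so check kubelet first:
sudo journalctl -u kubelet --no-pager -n 100
# Is the apiserver container actually running, and does it answer locally?
sudo docker ps --filter name=kube-apiserver
curl -k https://localhost:6443/healthz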

My /etc/docker/daemon.json looks like this:

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
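
For completeness, changes to daemon.json only take effect after restarting Docker, roughly (a sketch):

sudo systemctl daemon-reload
sudo systemctl restart docker
sudo docker info | grep -i cgroup   # should report "Cgroup Driver: systemd"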
1. Could you specify which --cgroup-driver you are using? 2. Have you tried to turn off swap and reset kubeadm, execute commands like sudo systemctl enable docker, sudo systemctl enable kubelet, systemctl daemon-reload, systemctl restart docker, and reset iptables? 3. What output do you get when using the --ignore-preflight-errors=all flag? 4. Did you try kubeadm init on other nodes? – PjoterS
Hi @PjoterS, 1.) I added the daemon.json to my original post; I'm using "systemd" here. 2.) What exactly do you mean? 3.) The error log stays the same. 4.) I tried this on my second master node, with the same behaviour as on the first node. – Robert Heine
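
For reference, the reset sequence from point 2.) of the comment would look roughly like this (a sketch of the commands named there, plus swap and iptables):

sudo swapoff -a                 # turn swap off (also remove/comment it in /etc/fstab)
sudo kubeadm reset -f
sudo systemctl enable docker
sudo systemctl enable kubelet
sudo systemctl daemon-reload
sudo systemctl restart docker
# clear iptables rules left behind by kubeadm/kube-proxy:
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X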

1 Answer


Since I was not able to find a decent solution, I created an issue in the original "kubeadm" project on GitHub, see here: https://github.com/kubernetes/kubeadm/issues/1930 .

Since the "triage" suggested in the issue was not feasable (Ubuntu is pretty much "set") for me, I ended in setting up another Docker distribution, as described here: https://docs.docker.com/install/linux/docker-ce/ubuntu/ , purging installed distribution before starting the new setup.

While running Docker (Community) v19.03.5 with kubeadm v1.16.3 throws the following warning:

[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09

the results are fine: I managed to set up my HA cluster as described in the original documentation.

So, this can be considered a workaround, NOT a solution to my original issue!