I am trying to set up an HAProxy-fronted multi-master Kubernetes cluster, as described in [1]. My network configuration is:
- haproxy = 192.168.1.213
- master0|1|2 = 192.168.1.210|211|212
- worker0|1|2 = 192.168.1.220|221|222 (not relevant at this point)
All hosts can reach each other, and DNS resolves for every node.
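For reference, connectivity and name resolution can be verified from any node with something along these lines (the short hostnames are an assumption based on the list above):

# check that every node resolves and answers a ping (hostnames assumed
# to match the host list above)
for h in haproxy master0 master1 master2 worker0 worker1 worker2; do
  getent hosts "$h" >/dev/null && ping -c1 -W1 "$h" >/dev/null \
    && echo "$h OK" || echo "$h FAILED"
done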
Each node runs Ubuntu 18.04.3 LTS. Docker is installed as
- docker.io/bionic-updates,bionic-security,now 18.09.7-0ubuntu1~18.04.4 amd64 [installed]
The Kubernetes packages currently installed are
- kubeadm/kubernetes-xenial,now 1.16.3-00 amd64 [installed]
- kubectl/kubernetes-xenial,now 1.16.3-00 amd64 [installed]
- kubelet/kubernetes-xenial,now 1.16.3-00 amd64 [installed,automatic]
- kubernetes-cni/kubernetes-xenial,now 0.7.5-00 amd64 [installed,automatic]
using the additional repository described in [2] (I'm aware that my VMs run bionic, but the newest repository available is still xenial).
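For completeness, the packages were installed roughly along the lines documented in [2]; the commands below are a sketch of those steps, not a verbatim transcript:

# add the Kubernetes apt repository ([2]) and pin the matching 1.16.3 versions
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubeadm=1.16.3-00 kubelet=1.16.3-00 kubectl=1.16.3-00
sudo apt-mark hold kubeadm kubelet kubectl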
My haproxy is installed as haproxy/bionic,now 2.0.9-1ppa1~bionic amd64 [installed] from the [3] repository.
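The PPA from [3] can be added roughly like this (a sketch; the version pin is an assumption):

sudo add-apt-repository ppa:vbernat/haproxy-2.0
sudo apt-get update
sudo apt-get install -y haproxy=2.0.\*

The haproxy configuration on the load balancer looks like this: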
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    retries 2
    timeout connect 3000ms
    timeout client 5000ms
    timeout server 5000ms

frontend kubernetes
    bind *:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server master0 192.168.1.210:6443 check fall 3 rise 2
    server master1 192.168.1.211:6443 check fall 3 rise 2
    server master2 192.168.1.212:6443 check fall 3 rise 2
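Before involving kubeadm at all, the frontend can be sanity-checked from any node, e.g. (haproxy.my.lan is the DNS name of the load balancer, 192.168.1.213; with no apiserver running yet, only the TCP handshake is expected to succeed):

# confirm the load balancer accepts TCP on 6443
nc -zv haproxy.my.lan 6443
# once at least one apiserver is up behind it, this should return version JSON
curl -k https://haproxy.my.lan:6443/version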
While trying to set up my first control plane as described in [4], running kubeadm init --control-plane-endpoint "haproxy.my.lan:6443" --upload-certs -v=6 results in this error:
Error writing Crisocket information for the control-plane node
The full log is in [5]. I'm pretty lost as to whether there is a mistake in my haproxy configuration, or whether the fault lies with Docker or Kubernetes itself.
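For context, after the failed init the node state can be inspected with standard commands like these (nothing beyond stock systemd/Docker tooling is assumed):

# kubelet and container runtime status / recent logs on the failing master
systemctl status kubelet docker
sudo journalctl -u kubelet --no-pager -n 100
# containers kubeadm managed to start before the error
sudo docker ps -a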
My /etc/docker/daemon.json looks like this:
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    },
    "storage-driver": "overlay2"
}
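After writing that file, Docker has to be restarted for the systemd cgroup driver to take effect; a quick check looks like this (sketch):

sudo systemctl daemon-reload
sudo systemctl restart docker
sudo docker info | grep -i cgroup    # expected output: Cgroup Driver: systemd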
- [1] https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
- [2] https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-using-native-package-management
- [3] https://launchpad.net/~vbernat/+archive/ubuntu/haproxy-2.0
- [4] https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/#stacked-control-plane-and-etcd-nodes
- [5] https://pastebin.com/QD5mbiyN
A comment from PjoterS asks: [...] sudo systemctl enable docker, sudo systemctl enable kubelet, systemctl daemon-reload, systemctl restart docker and reset iptables? 3. What output do you get when you use the --ignore-preflight-errors=all flag? 4. Did you try kubeadm init on other nodes? – PjoterS
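For reference, a sketch of how those suggestions could be applied on the failing master before retrying (the iptables flush mirrors the manual cleanup documented for kubeadm reset; whether this resolves the crisocket error is exactly what is in question):

sudo systemctl enable docker
sudo systemctl enable kubelet
sudo systemctl daemon-reload
sudo systemctl restart docker
# tear down the half-initialized control plane and flush iptables (per kubeadm reset docs)
sudo kubeadm reset
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
# retry with verbose output and preflight errors ignored, as asked in the comment
sudo kubeadm init --control-plane-endpoint "haproxy.my.lan:6443" --upload-certs --ignore-preflight-errors=all -v=6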