0 votes

I'm using kubeadm to create a cluster of 3 nodes:

  • One Master
  • Two Workers

I'm using Weave as the pod network.

The status of my cluster is this:

NAME         STATUS   ROLES    AGE   VERSION
darthvader   Ready    <none>   56m   v1.12.3
jarjar       Ready    master   60m   v1.12.3
palpatine    Ready    <none>   55m   v1.12.3

And I tried to initialize Helm and Tiller in my cluster:

helm init

The result was this:

$HELM_HOME has been configured at /home/ubuntu/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

And the status of my pods is this:

NAME                             READY   STATUS              RESTARTS   AGE
coredns-576cbf47c7-8q6j7         1/1     Running             0          54m
coredns-576cbf47c7-kkvd8         1/1     Running             0          54m
etcd-jarjar                      1/1     Running             0          54m
kube-apiserver-jarjar            1/1     Running             0          54m
kube-controller-manager-jarjar   1/1     Running             0          53m
kube-proxy-2lwgd                 1/1     Running             0          49m
kube-proxy-jxwqq                 1/1     Running             0          54m
kube-proxy-mv7vh                 1/1     Running             0          50m
kube-scheduler-jarjar            1/1     Running             0          54m
tiller-deploy-845cffcd48-bqnht   0/1     ContainerCreating   0          12m
weave-net-5h5hw                  2/2     Running             0          51m
weave-net-jv68s                  2/2     Running             0          50m
weave-net-vsg2f                  2/2     Running             0          49m

The problem is that tiller-deploy is stuck in the ContainerCreating state.

And I ran

kubectl describe pod tiller-deploy -n kube-system

to check the status of Tiller, and I found the following events:

Failed create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded

Pod sandbox changed, it will be killed and re-created.

How can I get the tiller-deploy pod created successfully? I don't understand why the pod sandbox is failing.


2 Answers

0 votes

Maybe the problem is in the way you deployed Tiller. I just recreated this and had no issues using Weave and Compute Engine instances on GCP.

You should retry with a different method of installing Helm, as there may have been some issue (you did not provide details on how you installed it).

Reset Helm and delete the Tiller pod:

helm reset --force

(If Tiller persists, find the name of its ReplicaSet with kubectl get all --all-namespaces and delete it with kubectl delete rs/<name>.)

Now try deploying Helm and Tiller using a different method, for example by running it through the installation script, as explained here.
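For reference, a minimal sketch of the script-based install for Helm v2 (the script URL below is an assumption on my part; use the one from the linked docs):

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh   # URL assumed, verify against the Helm docs
chmod 700 get_helm.sh
./get_helm.sh
helm init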

You can also run Helm without Tiller.
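For example (a rough sketch; the chart and release names below are illustrative), you can render a chart locally with helm template and apply the output yourself, which needs no Tiller in the cluster:

helm fetch stable/nginx-ingress --untar                                              # any chart; name is illustrative
helm template ./nginx-ingress --name my-release --namespace kube-system > rendered.yaml
kubectl apply -f rendered.yaml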

0 votes

It looks like you are running into this.

Most likely your node cannot pull the container image because of a network connectivity problem. It could be an image like gcr.io/kubernetes-helm/tiller:v2.3.1, or the pause container gcr.io/google_containers/pause (unlikely, if your other pods are running). You can try logging into your nodes (darthvader, palpatine) and manually debugging with:

$ docker pull gcr.io/kubernetes-helm/tiller:v2.3.1 <= Use the version on your tiller pod spec or deployment (tiller-deploy)
$ docker pull gcr.io/google_containers/pause
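If the pulls succeed, a further suggestion (beyond the pull test above) is to check the kubelet log on the affected node, which normally records why the sandbox creation timed out:

$ journalctl -u kubelet --no-pager | grep -i sandbox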