35 votes

I am getting a couple of errors with Helm that I cannot find explanations for elsewhere. The two errors are below.

Error: no available release name found
Error: the server does not allow access to the requested resource (get configmaps)

Further details of the two errors are in the code block further below.

I have installed a Kubernetes cluster on Ubuntu 16.04. I have a Master (K8SMST01) and two nodes (K8SN01 & K8SN02).

The cluster was created with kubeadm, using the Weave network add-on for Kubernetes 1.6+.

Everything seems to run perfectly well as far as Deployments, Services, Pods, etc... DNS seems to work fine, meaning pods can access services using the DNS name (myservicename.default).

Using "helm create" and "helm search" work, but interacting with the tiller deployment do not seem to work. Tiller is installed and running according to the Helm install documentation.

root@K8SMST01:/home/blah/charts# helm version

Client: &version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.3.0", GitCommit:"d83c245fc324117885ed83afc90ac74afed271b4", GitTreeState:"clean"}

root@K8SMST01:/home/blah/charts# helm install ./mychart

Error: no available release name found

root@K8SMST01:/home/blah/charts# helm ls

Error: the server does not allow access to the requested resource (get configmaps)

Here are the running pods:

root@K8SMST01:/home/blah/charts# kubectl get pods -n kube-system -o wide
NAME                                      READY     STATUS    RESTARTS   AGE       IP             NODE
etcd-k8smst01                             1/1       Running   4          1d        10.139.75.19   k8smst01
kube-apiserver-k8smst01                   1/1       Running   3          19h       10.139.75.19   k8smst01
kube-controller-manager-k8smst01          1/1       Running   2          1d        10.139.75.19   k8smst01
kube-dns-3913472980-dm661                 3/3       Running   6          1d        10.32.0.2      k8smst01
kube-proxy-56nzd                          1/1       Running   2          1d        10.139.75.19   k8smst01
kube-proxy-7hflb                          1/1       Running   1          1d        10.139.75.20   k8sn01
kube-proxy-nbc4c                          1/1       Running   1          1d        10.139.75.21   k8sn02
kube-scheduler-k8smst01                   1/1       Running   3          1d        10.139.75.19   k8smst01
tiller-deploy-1172528075-x3d82            1/1       Running   0          22m       10.44.0.3      k8sn01
weave-net-45335                           2/2       Running   2          1d        10.139.75.21   k8sn02
weave-net-7j45p                           2/2       Running   2          1d        10.139.75.20   k8sn01
weave-net-h279l                           2/2       Running   5          1d        10.139.75.19   k8smst01

9 Answers

26 votes

I think it's an RBAC issue. It seems that Helm isn't ready for Kubernetes 1.6.1's RBAC.

There is an issue open for this on Helm's GitHub:

https://github.com/kubernetes/helm/issues/2224

"When installing a cluster for the first time using kubeadm v1.6.1, the initialization defaults to setting up RBAC controlled access, which messes with permissions needed by Tiller to do installations, scan for installed components, and so on. helm init works without issue, but helm list, helm install, and so on all do not work, citing some missing permission or another."

A temporary workaround has been suggested:

"We "disable" RBAC using the command kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts;"

But I cannot speak to its validity. The good news is that this is a known issue and work is being done to fix it. Hope this helps.
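
If you do try that workaround, one way to confirm the binding took effect is simply to look it up (the binding name is the one from the quoted command):

# Shows the permissive-binding ClusterRoleBinding created by the workaround
kubectl get clusterrolebinding permissive-binding -o yaml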

74 votes

The solution given by kujenga from the GitHub issue worked without any other modifications:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
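
If you want to double-check that the patch took effect, something like this should show the new service account and the re-created Tiller pod (assuming the standard labels that helm init puts on the tiller-deploy Deployment):

# Should print "tiller" once the patch has been applied
kubectl get deploy tiller-deploy --namespace kube-system \
    -o jsonpath='{.spec.template.spec.serviceAccountName}'

# The Tiller pod should have been re-created and be Running
kubectl get pods --namespace kube-system -l app=helm,name=tiller
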
11 votes

I had the same issue with a kubeadm setup on CentOS 7.

Helm doesn't create a service account when you run "helm init", and the default one doesn't have permission to read ConfigMaps, so Tiller fails when it checks whether the release name it wants to use is unique.

This got me past it:

kubectl create clusterrolebinding add-on-cluster-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:default

But that gives the default account a lot of power; I just did this so I could get on with my work. Helm needs to add the creation of its own service account to the "helm init" code.
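
As a sanity check, you can ask the API server whether the default service account is actually allowed to read ConfigMaps in kube-system (this assumes a kubectl new enough to have "kubectl auth can-i"):

# Prints "no" while the default service account lacks the permission Tiller needs
kubectl auth can-i get configmaps --namespace kube-system \
    --as system:serviceaccount:kube-system:default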

3 votes

All add-ons in Kubernetes use the "default" service account, so Helm (Tiller) also runs with the "default" service account. You need to grant it permissions by assigning role bindings to it.

For read-only permissions:

kubectl create rolebinding default-view \
    --clusterrole=view \
    --serviceaccount=kube-system:default \
    --namespace=kube-system

For admin access (e.g. to install packages):

kubectl create clusterrolebinding add-on-cluster-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:default

You can also install the Tiller server in a different namespace using the steps below.

  1. First create the namespace.
  2. Create the service account for that namespace.
  3. Install Tiller in that namespace using the command below.

helm init --tiller-namespace test-namespace
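
For illustration, the three steps might look roughly like this end to end (the namespace, service account, and binding names here are placeholders):

# 1. Create the namespace
kubectl create namespace test-namespace

# 2. Create a service account in that namespace and grant it permissions
kubectl create serviceaccount tiller --namespace test-namespace
kubectl create clusterrolebinding tiller-test-namespace \
    --clusterrole=cluster-admin \
    --serviceaccount=test-namespace:tiller

# 3. Install Tiller into that namespace using that service account
helm init --tiller-namespace test-namespace --service-account tiller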

3 votes

This solution has worked for me: https://github.com/helm/helm/issues/3055#issuecomment-397296485

$ kubectl create serviceaccount --namespace kube-system tiller

$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

$ helm init --service-account tiller --upgrade

$ helm repo update

$ helm install stable/redis --version 3.3.5

But after that, something changed: I have to add the --insecure-skip-tls-verify=true flag to my kubectl commands! I don't know how to fix that, given that I am interacting with a gcloud container cluster.

2 votes

Per https://github.com/kubernetes/helm/issues/2224#issuecomment-356344286, the following commands resolved the error for me too:

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
2 votes

Per https://github.com/kubernetes/helm/issues/3055

helm init --service-account default

This worked for me when the RBAC (serviceaccount) commands didn't.

1 vote

It's an RBAC issue. You need to have a service account with the cluster-admin role, and you should pass this service account during Helm initialization.

For example, if you have created a service account with the name tiller, your helm command would look like the following.

helm init --service-account=tiller
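
If that service account does not exist yet, it can be created along the lines of the earlier answers (the binding name here is arbitrary):

# Create the service account and give it the cluster-admin role
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller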

I followed this blog to resolve this issue. https://scriptcrunch.com/helm-error-no-available-release/

0 votes

Check the logs for your Tiller container:

kubectl logs tiller-deploy-XXXX --namespace=kube-system

If you find something like this:

Error: 'dial tcp 10.44.0.16:3000: connect: no route to host'

then it is probably a firewall/iptables issue, as described here; the solution is to remove the offending rules:

sudo iptables -D  INPUT -j REJECT --reject-with icmp-host-prohibited
sudo iptables -D  FORWARD -j REJECT --reject-with icmp-host-prohibited
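
If the exact pod name is not handy, the same logs can usually be fetched by label selector (assuming the standard labels that helm init puts on the Tiller deployment):

# Fetch Tiller logs without knowing the generated pod name
kubectl logs --namespace kube-system -l app=helm,name=tiller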