7 votes

I use Helm to deploy charts on my Kubernetes cluster, but since one day I can no longer deploy a new chart or upgrade an existing one.

Indeed, each time I use Helm I get an error message telling me that it is not possible to install or upgrade resources.

If I run helm install --name foo . -f values.yaml --namespace foo-namespace I get this output:

Error: release foo failed: the server could not find the requested resource

If I run helm upgrade --install foo . -f values.yaml --namespace foo-namespace or helm upgrade foo . -f values.yaml --namespace foo-namespace I get this error:

Error: UPGRADE FAILED: "foo" has no deployed releases

I don't really understand why.

This is my helm version:

Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}

On my Kubernetes cluster I have Tiller deployed with the same version. When I run kubectl describe pods tiller-deploy-84b... -n kube-system:

Name:               tiller-deploy-84b8...
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               k8s-worker-1/167.114.249.216
Start Time:         Tue, 26 Feb 2019 10:50:21 +0100
Labels:             app=helm
                    name=tiller
                    pod-template-hash=84b...
Annotations:        <none>
Status:             Running
IP:                 <IP_NUMBER>
Controlled By:      ReplicaSet/tiller-deploy-84b8...
Containers:
  tiller:
    Container ID:   docker://0302f9957d5d83db22...
    Image:          gcr.io/kubernetes-helm/tiller:v2.12.3
    Image ID:       docker-pullable://gcr.io/kubernetes-helm/tiller@sha256:cab750b402d24d...
    Ports:          44134/TCP, 44135/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Tue, 26 Feb 2019 10:50:28 +0100
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TILLER_NAMESPACE:    kube-system
      TILLER_HISTORY_MAX:  0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from helm-token-... (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  helm-token-...:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  helm-token-...
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                   Message
  ----    ------     ----  ----                   -------
  Normal  Scheduled  26m   default-scheduler      Successfully assigned kube-system/tiller-deploy-84b86cbc59-kxjqv to worker-1
  Normal  Pulling    26m   kubelet, k8s-worker-1  pulling image "gcr.io/kubernetes-helm/tiller:v2.12.3"
  Normal  Pulled     26m   kubelet, k8s-worker-1  Successfully pulled image "gcr.io/kubernetes-helm/tiller:v2.12.3"
  Normal  Created    26m   kubelet, k8s-worker-1  Created container
  Normal  Started    26m   kubelet, k8s-worker-1  Started container

Has anyone faced the same issue?


Update:

This is the folder structure of my chart, named foo:

> templates/
  > deployment.yaml 
  > ingress.yaml
  > service.yaml
> .helmignore
> Chart.yaml 
> values.yaml

I have already tried to delete the failed release using the delete command helm del --purge foo, but the same errors occurred.

To be more precise, the chart foo is in fact a custom chart using my own private registry. The imagePullSecrets are set up as usual.
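For reference, this is roughly how the pull secret is referenced in templates/deployment.yaml (the secret name my-registry-secret and the image are illustrative, not the real values):

spec:
  template:
    spec:
      imagePullSecrets:
        - name: my-registry-secret   # secret created in foo-namespace for the private registry
      containers:
        - name: foo
          image: my.private.registry/foo:1.0.0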

I have run these two commands, helm upgrade foo . -f values.yaml --namespace foo-namespace --force and helm upgrade --install foo . -f values.yaml --namespace foo-namespace --force, and I still get an error:

UPGRADE FAILED
ROLLING BACK
Error: failed to create resource: the server could not find the requested resource
Error: UPGRADE FAILED: failed to create resource: the server could not find the requested resource

Note that foo-namespace already exists, so the error does not come from the namespace name or the namespace itself. Indeed, if I run helm list, I can see that the foo release is in a FAILED status.

helm del --purge foo is a little bit of a blunt hammer, but it makes Helm actually forget everything it knows about the release, and it sometimes helps with this kind of problem. – David Maze
Can you show the directory structure from where you are trying to run $ helm install ... or $ helm upgrade ...? – Shudipta Sharma
Does the foo-namespace namespace exist? – Anna Slastnikova
@AnnaSlastnikova Yes, the foo namespace exists; the error does not come from that part (a typo or anything about the namespace itself). – french_dev

4 Answers

9 votes

Tiller stores all releases as ConfigMaps in Tiller's namespace (kube-system in your case). Try to find the broken release and delete its ConfigMap using these commands:

$ kubectl get cm --all-namespaces -l OWNER=TILLER
NAMESPACE     NAME               DATA   AGE
kube-system   nginx-ingress.v1   1      22h

$ kubectl delete cm  nginx-ingress.v1 -n kube-system

Next, delete all of the release's objects (Deployments, Services, Ingresses, etc.) manually and reinstall the release using Helm again, for example as sketched below.
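A sketch of the manual cleanup and reinstall, assuming the chart labels its resources with release=foo (adjust the label selector and namespace to match your chart):

$ kubectl delete deployment,service,ingress -l release=foo -n foo-namespace
$ helm install --name foo . -f values.yaml --namespace foo-namespace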

If that didn't help, you may try to download a newer release of Helm (v2.14.3 at the moment) and update/reinstall Tiller.
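For example, after downloading the newer client, Tiller can usually be upgraded in place with (the image tag here just illustrates the version mentioned above):

$ helm init --upgrade --tiller-image gcr.io/kubernetes-helm/tiller:v2.14.3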

3 votes

I had the same issue, but the cleanup did not help, and trying the same Helm chart on a brand-new Kubernetes cluster did not help either.

It turned out that a missing apiVersion caused the problem. I found it by doing a

helm install xyz --dry-run

copying the output to a new test.yaml file and running

kubectl apply -f test.yaml

There I could see the error (the apiVersion line had been moved to a comment line).
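As an illustration of the kind of mistake I mean (the resource kind and apiVersion here are made up, not copied from my chart), the rendered manifest looked conceptually like this:

# Broken: the apiVersion line ended up commented out, so the manifest
# has no apiVersion and the API server answers with
# "the server could not find the requested resource".
#apiVersion: apps/v1
kind: Deployment
metadata:
  name: xyz

# Fixed:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xyz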

1 vote

I had the same problem, but not due to broken releases; it appeared after upgrading Helm. It seems newer versions of Helm do not play well with the --wait parameter. So for anyone facing the same issue: just removing --wait (and leaving --debug) from the helm upgrade parameters solved my issue.
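For illustration only (the release name and flags are borrowed from the question, not from my setup):

# failed for me:
helm upgrade foo . -f values.yaml --namespace foo-namespace --wait --debug

# worked once --wait was dropped and --debug kept:
helm upgrade foo . -f values.yaml --namespace foo-namespace --debug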

0 votes

I had this issue when I tried to deploy a custom chart with a CronJob instead of a Deployment. The error occurred on this step in the deploy script. To resolve it, you need to add the environment variable ROLLOUT_STATUS_DISABLED=true; it is solved in this issue.
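If your deploy script picks that variable up from the environment (for example in a CI job), a minimal sketch of setting it before the deploy step could look like this (the script name ./deploy is hypothetical):

$ export ROLLOUT_STATUS_DISABLED=true
$ ./deploy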