
I am hitting the Tiller panic crash described in the Helm FAQ (https://docs.helm.sh/using_helm/) under the question
Q: Tiller crashes with a panic:
panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation...

The FAQ answer suggests

To fix this, you will need to change your Kubernetes configuration. Make sure that --service-account-private-key-file from controller-manager and --service-account-key-file from apiserver point to the same x509 RSA key.

I've searched online and read the docs at https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/, which state

You must pass a service account private key file to the token controller in the controller-manager by using the --service-account-private-key-file option. The private key will be used to sign generated service account tokens. Similarly, you must pass the corresponding public key to the kube-apiserver using the --service-account-key-file option. The public key will be used to verify the tokens during authentication.
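
For reference, on the control plane nodes these two flags are just arguments passed to the kube-apiserver and kube-controller-manager processes (often in static pod manifests). The paths below are only an illustration of what "pointing to the same key pair" means, not values from my cluster:

    # kube-apiserver flags (excerpt, illustrative paths)
    kube-apiserver ... --service-account-key-file=/etc/kubernetes/pki/sa.pub ...

    # kube-controller-manager flags (excerpt, illustrative paths)
    kube-controller-manager ... --service-account-private-key-file=/etc/kubernetes/pki/sa.key ...

Here sa.pub would have to be the public half of the sa.key private key (or both flags can point at the same private key file, since the apiserver accepts either).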

and the docs at https://kubernetes.io/docs/reference/access-authn-authz/authentication/.
Both explain the concepts well, but give no specifics on how to actually change the configuration.

How do I change my Kubernetes configuration as the FAQ answer suggests?

Make sure that --service-account-private-key-file from controller-manager and --service-account-key-file from apiserver point to the same x509 RSA key.

Details:
using kops and a gossip-based k8s cluster
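
For completeness, this is how I have been inspecting what the control plane is currently running with (the pod names are placeholders for my master node name):

    # Check which key files the control plane components were started with
    kubectl -n kube-system get pods | grep -E 'kube-apiserver|kube-controller-manager'
    kubectl -n kube-system get pod kube-apiserver-<master-node> -o yaml | grep service-account-key-file
    kubectl -n kube-system get pod kube-controller-manager-<master-node> -o yaml | grep service-account-private-key-file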


1 Answer


I have found through trial and error that helm ls seems to hit a bug in my setup, or perhaps helm ls is simply expected to fail this way when there are no releases to show.

Right after helm init, helm ls would show the error panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation...,
but if I just went ahead and used Helm anyway,
e.g. helm install stable/<package>, the chart deployed successfully.

And once charts are deployed, calling helm ls no longer returns the error and correctly shows the deployed charts (releases).
I confirmed with kubectl get pods, kubectl get services and kubectl get deployments that the released objects are all running.
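
To summarize the sequence that worked for me (the chart name is a placeholder, as above):

    helm init                                # installs Tiller into the cluster
    helm ls                                  # panics here while there are no releases yet
    helm install stable/<package>            # deploy any chart anyway
    helm ls                                  # now lists the release without panicking
    kubectl get pods,services,deployments    # the released objects are all running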