5 votes

We used to deploy Kubernetes resources with plain kubectl commands for services, deployments, configmaps, etc. Now we need to start using Helm 3 and integrate it into our pipelines, but when I try to run the helm upgrade command, it fails with the error below:

Error: rendered manifests contain a resource that already exists. Unable to continue with install: existing resource conflict: namespace: default

So, these resources were never created with Helm; they were created the usual way with the kubectl apply command.

I just need to know how to use Helm in the pipeline without re-creating the k8s resources. The only workaround I have found so far is to delete the resources and redeploy them using Helm.

Below is the command I tried:

helm upgrade --atomic --debug --install --force test .

Thanks, Aly


3 Answers

13 votes

See this Helm 3 feature: Adopt resources into release with correct instance and managed-by labels.

Helm will no longer error when attempting to create a resource that already exists in the target cluster if the existing resource has the correct meta.helm.sh/release-name and meta.helm.sh/release-namespace annotations, and matches the label selector app.kubernetes.io/managed-by=Helm. This facilitates zero-downtime migrations to Helm 3 for managing existing deployments, and allows Helm to "adopt" existing resources that it previously created.

In order to allow an existing resource to be adopted by Helm, add release metadata and the managed-by label:

# Resource to adopt and the release it should belong to
KIND=deployment
NAME=my-app-staging
RELEASE=staging
NAMESPACE=default

# Tell Helm which release and release namespace own the resource
kubectl -n "$NAMESPACE" annotate "$KIND" "$NAME" meta.helm.sh/release-name="$RELEASE"
kubectl -n "$NAMESPACE" annotate "$KIND" "$NAME" meta.helm.sh/release-namespace="$NAMESPACE"
# Mark the resource as managed by Helm
kubectl -n "$NAMESPACE" label "$KIND" "$NAME" app.kubernetes.io/managed-by=Helm
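
Once every conflicting resource carries that metadata, Helm adopts it instead of erroring out, so the upgrade from the question should go through. A minimal sketch, assuming the chart is in the current directory and the release is called test as in the question (the --force flag should not be needed once the resources are adopted):

# With the adoption annotations/label in place, this no longer conflicts
helm upgrade --atomic --debug --install test .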

2 votes

Honestly, while FL3SH's answer is what you are looking for, the best choice is usually to just delete your k8s resources and redeploy them with Helm. There are some exceptions where that is not an option:

  1. Your helm chart is trying to create a namespace (e.g. default)
  2. Your deployments cannot be down for any time
  3. Your helm chart has persistent volume claims

0 votes

You could add all the Helm labels/annotations yourself. You can check which labels and other metadata Helm expects by rendering the chart with helm template, and then use kubectl label or kubectl annotate to add whatever is missing.

I personally never tried it, because it is too much work, and in the end you have to recreate the pods with the new labels anyway if they are managed by a deployment/statefulset.
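
For completeness, a rough sketch of how that could be scripted, assuming the chart is in the current directory, the release is called test, and everything lives in the default namespace (the kind/name extraction relies on kubectl's client-side dry run and is only a starting point):

RELEASE=test
NAMESPACE=default
# Render the chart and list every resource it would create as TYPE/NAME
helm template "$RELEASE" . | kubectl create --dry-run=client -f - -o name |
while read -r res; do
  # Add the adoption metadata; resources that do not exist in the cluster yet will
  # just produce an error you can ignore
  kubectl -n "$NAMESPACE" annotate "$res" meta.helm.sh/release-name="$RELEASE" --overwrite
  kubectl -n "$NAMESPACE" annotate "$res" meta.helm.sh/release-namespace="$NAMESPACE" --overwrite
  kubectl -n "$NAMESPACE" label "$res" app.kubernetes.io/managed-by=Helm --overwrite
done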