I have a project that needs to use a mutating webhook whose namespaceSelector matches on a namespace label, so that specific label has to be added to the namespace first.
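
For context, here is a minimal sketch of the kind of webhook configuration this is about; the webhook name, service, and rules are hypothetical, and only the namespaceSelector on the mutating label (managed by the hooks below) matters here:

# Illustrative only; not part of the chart's hooks
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: example-mutating-webhook          # hypothetical name
webhooks:
- name: example.mutating.webhook.local    # hypothetical name
  clientConfig:
    service:
      name: webhook-svc                   # hypothetical service
      namespace: kube-system
      path: /mutate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  namespaceSelector:
    matchExpressions:
    - key: mutating                       # skip namespaces labeled mutating=disabled
      operator: NotIn
      values: ["disabled"]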

I used three hooks: hook1 (pre-install, pre-delete, etc.) creates the RBAC at weight -5, so that it exists before the weight-0 Jobs run; hook2 (pre-install) adds the label to the namespace via a Job; and hook3 (pre-delete) removes the label via a Job. The hook contents are as follows.

Hook1 sets up the permissions:

# RBAC.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ns-edit
  namespace: kube-system
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade,pre-delete
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ns-edit
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade,pre-delete
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "watch", "list", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: edit-ns
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade,pre-delete
    "helm.sh/hook-weight": "-5"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ns-edit
subjects:
- kind: ServiceAccount
  name: ns-edit
  namespace: kube-system
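
If the RBAC is in doubt, the permissions granted here can be checked directly while the objects exist:

kubectl auth can-i patch namespaces \
  --as=system:serviceaccount:kube-system:ns-edit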

Hook2 adds the label to the namespace:

# label-ns.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: label-ns
  namespace: kube-system
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      containers:
      - name: labeler
        image: gcr.io/google_containers/hyperkube:v1.9.7
        command:
        - kubectl
        - label
        - ns
        - kube-system
        - mutating=disabled
        - --overwrite
      restartPolicy: Never
      serviceAccountName: ns-edit

Hook3 removes the label that hook2 added (the trailing "-" in "mutating-" is kubectl's syntax for removing a label):

# delete-ns-label.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: del-ns-label
  namespace: kube-system
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      containers:
      - name: labeler
        image: gcr.io/google_containers/hyperkube:v1.9.7
        command:
        - kubectl
        - label
        - ns
        - kube-system
        - mutating-
      restartPolicy: Never
      serviceAccountName: ns-edit
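
Both Jobs just run kubectl against the kube-system namespace, so their effect is easy to verify by hand:

kubectl get ns kube-system --show-labels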

During chart deployment with Helm 3, the hook2 and hook3 Jobs were triggered but did not complete, because the ServiceAccount (ns-edit) was not found.
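
The ServiceAccount error shows up as an event on the Job itself, e.g. via kubectl -n kube-system describe job label-ns. Here is the debug output from the install: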

helm install mutating-webhook mutating-webhook-0.1.0.tgz --debug
client.go:254: [debug] Starting delete for "ns-edit" ServiceAccount
client.go:283: [debug] serviceaccounts "ns-edit" not found
client.go:108: [debug] creating 1 resource(s)
client.go:254: [debug] Starting delete for "ns-edit" ClusterRole
client.go:283: [debug] clusterroles.rbac.authorization.k8s.io "ns-edit" not found
client.go:108: [debug] creating 1 resource(s)
client.go:254: [debug] Starting delete for "edit-ns" ClusterRoleBinding
client.go:283: [debug] clusterrolebindings.rbac.authorization.k8s.io "edit-ns" not found
client.go:108: [debug] creating 1 resource(s)
client.go:108: [debug] creating 1 resource(s)
client.go:463: [debug] Watching for changes to Job label-ns with timeout of 5m0s
client.go:491: [debug] Add/Modify event for label-ns: ADDED
client.go:530: [debug] label-ns: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
client.go:491: [debug] Add/Modify event for label-ns: MODIFIED
client.go:530: [debug] label-ns: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
client.go:491: [debug] Add/Modify event for label-ns: MODIFIED
client.go:254: [debug] Starting delete for "ns-edit" ServiceAccount
client.go:254: [debug] Starting delete for "ns-edit" ClusterRole
client.go:254: [debug] Starting delete for "edit-ns" ClusterRoleBinding
client.go:254: [debug] Starting delete for "label-ns" Job
client.go:108: [debug] creating 10 resource(s)
client.go:254: [debug] Starting delete for "ns-edit" ServiceAccount
client.go:108: [debug] creating 1 resource(s)
client.go:254: [debug] Starting delete for "ns-edit" ClusterRole
client.go:108: [debug] creating 1 resource(s)
client.go:254: [debug] Starting delete for "edit-ns" ClusterRoleBinding
client.go:108: [debug] creating 1 resource(s)
client.go:254: [debug] Starting delete for "ns-edit" ServiceAccount
client.go:254: [debug] Starting delete for "ns-edit" ClusterRole
client.go:254: [debug] Starting delete for "edit-ns" ClusterRoleBinding

As the log shows, Helm 3 deletes the RBAC hook resources again (per their hook-succeeded delete policy) during the install itself. In Helm 2, however, the same chart works correctly: hook1 and hook2 are triggered at helm install to add the namespace label, and hook1 and hook3 are triggered at helm delete --purge to remove the label that hook2 added.

Why do Helm 2 and Helm 3 behave so differently with hooks?

How can the chart be modified so that it behaves the same under both? If that is not possible, how should this be designed in Helm 3?

I really appreciate any help with this.