
I have GitLab (11.8.1, self-hosted) connected to a self-hosted K8s cluster (1.13.4). There are three projects in GitLab, named shipment, authentication_service and shipment_mobile_service.

All projects use the same K8s cluster configuration, except for the project namespace.

The first project installs Helm Tiller and GitLab Runner successfully through the GitLab UI.

On the second and third projects, only the Helm Tiller install succeeds; the GitLab Runner install fails with this log in the install-runner pod:

 Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Error: cannot connect to Tiller
+ sleep 1s
+ echo 'Retrying (30)...'
+ helm repo add runner https://charts.gitlab.io
Retrying (30)...
"runner" has been added to your repositories
+ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "runner" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈ 
+ helm upgrade runner runner/gitlab-runner --install --reset-values --tls --tls-ca-cert /data/helm/runner/config/ca.pem --tls-cert /data/helm/runner/config/cert.pem --tls-key /data/helm/runner/config/key.pem --version 0.2.0 --set 'rbac.create=true,rbac.enabled=true' --namespace gitlab-managed-apps -f /data/helm/runner/config/values.yaml
Error: UPGRADE FAILED: remote error: tls: bad certificate 
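
To isolate the failure, the client-to-Tiller TLS handshake can be checked on its own with the same cert paths the installer logs above (assuming Tiller lives in gitlab-managed-apps, as the GitLab-managed install does):

 helm version --tiller-namespace gitlab-managed-apps --tls --tls-ca-cert /data/helm/runner/config/ca.pem --tls-cert /data/helm/runner/config/cert.pem --tls-key /data/helm/runner/config/key.pem

A "bad certificate" from this command too would point at client certs that Tiller's CA does not trust.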

I didn't configure gitlab-ci with the K8s cluster on the first project, only on the second and third. The weird thing is that with the same helm-data (differing only by name), the second project deploys successfully but the third does not.

And because there is only one GitLab runner available (the one from the first project), I assigned both the second and third projects to it.

I use this gitlab-ci.yml for both projects; only the release name in the helm upgrade command differs.

stages:
  - test
  - build
  - deploy

variables:
  CONTAINER_IMAGE: dockerhub.linhnh.vn/${CI_PROJECT_PATH}:${CI_PIPELINE_ID}
  CONTAINER_IMAGE_LATEST: dockerhub.linhnh.vn/${CI_PROJECT_PATH}:latest
  CI_REGISTRY: dockerhub.linhnh.vn
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375 # required when using dind

# the test and build phases using docker:dind succeed

deploy_beta:
  stage: deploy
  image: alpine/helm
  script:
    - echo "Deploy test start ..."
    - helm init --upgrade
    - helm upgrade --install --force shipment-mobile-service --recreate-pods --set image.tag=${CI_PIPELINE_ID} ./helm-data
    - echo "Deploy test completed!"
  environment:
    name: staging
  tags: ["kubernetes_beta"]
  only:
  - master

The helm-data chart is very simple, so I don't think I really need to paste it all here; a minimal sketch in the same spirit follows.
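
A hypothetical values.yaml for such a chart, just to show what --set image.tag=${CI_PIPELINE_ID} overrides; the repository path is made up for illustration:

 # helm-data/values.yaml (hypothetical)
 image:
   repository: dockerhub.linhnh.vn/shipment/shipment-mobile-service
   tag: latest # overridden per pipeline by --set image.tag

Here is the log when the second project deploys successfully: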

Running with gitlab-runner 11.7.0 (8bb608ff)
  on runner-gitlab-runner-6c8555c86b-gjt9f XrmajZY2
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image linkyard/docker-helm ...
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-15-concurrent-0x2bms to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-15-concurrent-0x2bms to be running, status is Pending
Running on runner-xrmajzy2-project-15-concurrent-0x2bms via runner-gitlab-runner-6c8555c86b-gjt9f...
Cloning into '/root/authentication_service'...
Cloning repository...
Checking out 5068bf1f as master...
Skipping Git submodules setup
$ echo "Deploy start ...."
Deploy start ....
$ helm init --upgrade --dry-run --debug
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: gcr.io/kubernetes-helm/tiller:v2.13.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
status: {}

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}

...
$ helm upgrade --install --force authentication-service --recreate-pods --set image.tag=${CI_PIPELINE_ID} ./helm-data
WARNING: Namespace "gitlab-managed-apps" doesn't match with previous. Release will be deployed to default
Release "authentication-service" has been upgraded. Happy Helming!
LAST DEPLOYED: Tue Mar 26 05:27:51 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME                    READY  UP-TO-DATE  AVAILABLE  AGE
authentication-service  1/1    1           1          17d

==> v1/Pod(related)
NAME                                    READY  STATUS       RESTARTS  AGE
authentication-service-966c997c4-mglrb  0/1    Pending      0         0s
authentication-service-966c997c4-wzrkj  1/1    Terminating  0         49m

==> v1/Service
NAME                    TYPE      CLUSTER-IP     EXTERNAL-IP  PORT(S)       AGE
authentication-service  NodePort  10.108.64.133  <none>       80:31340/TCP  17d


NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services authentication-service)
  echo http://$NODE_IP:$NODE_PORT
$ echo "Deploy completed"
Deploy completed
Job succeeded

And the third project fails:

Running with gitlab-runner 11.7.0 (8bb608ff)
  on runner-gitlab-runner-6c8555c86b-gjt9f XrmajZY2
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image alpine/helm ...
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-xrmajzy2-project-18-concurrent-0bv4bx to be running, status is Pending
Running on runner-xrmajzy2-project-18-concurrent-0bv4bx via runner-gitlab-runner-6c8555c86b-gjt9f...
Cloning repository...
Cloning into '/canhnv5/shipmentmobile'...
Checking out 278cbd3d as master...
Skipping Git submodules setup
$ echo "Deploy test start ..."
Deploy test start ...
$ helm init --upgrade
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.
Error: error installing: deployments.extensions is forbidden: User "system:serviceaccount:shipment-mobile-service:shipment-mobile-service-service-account" cannot create resource "deployments" in API group "extensions" in the namespace "kube-system"
ERROR: Job failed: command terminated with exit code 1
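
Worth noting about the error above: helm init --upgrade tries to create or update the tiller-deploy Deployment in kube-system, which is exactly what this service account is forbidden from doing. A client-only init would set up /root/.helm without touching kube-system at all:

 helm init --client-only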

I can see that both use the same runner XrmajZY2 that I installed from the first project, and the same K8s namespace gitlab-managed-apps.

I think they run in privileged mode, but I don't know why the second project gets the right permissions while the third does not. Should I create the user system:serviceaccount:shipment-mobile-service:shipment-mobile-service-service-account and bind it to cluster-admin?
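
For reference, the binding the question asks about would be a one-liner like this (the binding name shipment-mobile-admin is made up here, and granting cluster-admin this broadly is a real security trade-off):

 kubectl create clusterrolebinding shipment-mobile-admin --clusterrole=cluster-admin --serviceaccount=shipment-mobile-service:shipment-mobile-service-service-account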

Thanks to @cookiedough's instructions, I did these steps:

  • Forked canhv5/shipment-mobile-service into my root account as root/shipment-mobile-service.

  • Deleted the gitlab-managed-apps namespace (with nothing left inside), then ran kubectl delete -f gitlab-admin-service-account.yaml.

  • Applied that file again, then got the token as @cookiedough guided.

  • Went back to root/shipment-mobile-service in GitLab, removed the previous cluster, added the cluster back with the new token, and installed Helm Tiller and then GitLab Runner from the GitLab UI.

  • Re-ran the job and the magic happened (a quick sanity check of the reinstall is shown below). But I'm still unclear why canhv5/shipment-mobile-service still gets the same error.
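
A quick way to confirm the reinstall left Tiller and the runner healthy (pod names will differ):

 kubectl get pods -n gitlab-managed-apps
 # expect a tiller-deploy-... pod and a runner-gitlab-runner-... pod, both Running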

Comments:

Have you already created the gitlab-admin serviceaccount? – cookiedough
Instead of that token, use the one I mentioned in the answer below. – cookiedough

1 Answer


Before you do the following, delete the gitlab-managed-apps namespace:

kubectl delete namespace gitlab-managed-apps

Citing from the GitLab tutorial: you will need to create a ServiceAccount and ClusterRoleBinding for GitLab, and the secret created as a result is what you will use to connect your project to your cluster.

Create a file called gitlab-admin-service-account.yaml with contents:

 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: gitlab-admin
   namespace: kube-system
 ---
 apiVersion: rbac.authorization.k8s.io/v1beta1
 kind: ClusterRoleBinding
 metadata:
   name: gitlab-admin
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: cluster-admin
 subjects:
 - kind: ServiceAccount
   name: gitlab-admin
   namespace: kube-system

Apply the service account and cluster role binding to your cluster:

kubectl apply -f gitlab-admin-service-account.yaml

Output:

 serviceaccount "gitlab-admin" created
 clusterrolebinding "gitlab-admin" created

Retrieve the token for the gitlab-admin service account:

 kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep gitlab-admin | awk '{print $1}')

Copy the <authentication_token> value from the output:

Name:         gitlab-admin-token-b5zv4
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=gitlab-admin
              kubernetes.io/service-account.uid=bcfe66ac-39be-11e8-97e8-026dce96b6e8

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      <authentication_token>
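
Equivalently, this one-liner prints the decoded token directly, relying on the secret Kubernetes auto-creates for the service account on clusters of this vintage:

 kubectl -n kube-system get secret $(kubectl -n kube-system get serviceaccount gitlab-admin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode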

Follow this tutorial to connect your cluster to the project; otherwise you will have to stitch the same thing together along the way with a lot more pain!