
Self-hosted GitLab 13.2.2-ee

My pipeline is failing, and I am not sure why. I looked in /var/log/gitlab/gitlab-rails/kubernetes.log but didn’t see anything there.

The stages that build and push my Docker images to the registry are working, but the review stage, which test-deploys the application to my Kubernetes cluster, fails with no logs to explain why.

Failed Pipeline Img

Failed Pipeline Img 2

Error Screen after Fail

Documentation

I also thought installing the Elastic Stack might help with the logs (other applications installed fine), but this happens:

Elastic Stack
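
For reference, this is where I have been looking on the server so far (Omnibus install); I assume gitlab-ctl tail is the right way to watch everything at once:

    # Omnibus GitLab: stream all service logs, or just the Rails logs
    sudo gitlab-ctl tail
    sudo gitlab-ctl tail gitlab-rails

    # The file mentioned above, watched while the pipeline runs
    tail -f /var/log/gitlab/gitlab-rails/kubernetes.log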

My .gitlab-ci.yml file:

    services:
      - docker:18-dind
    
    stages:
      - build
      - review
    
    variables:
      DOCKER_HOST: tcp://localhost:2375
    
    compile-php:
      stage: build
      image: docker:stable
      before_script:
        - docker login gitlab.mygitlabdomain.com:5050 -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD}
      script:
        - docker build -t "${CI_REGISTRY}/${CI_PROJECT_PATH}/php-fpm" ./php-fpm
        - docker tag "${CI_REGISTRY}/${CI_PROJECT_PATH}/php-fpm:latest" "${CI_REGISTRY}/${CI_PROJECT_PATH}/php-fpm:${CI_COMMIT_REF_NAME}"
        - docker push "${CI_REGISTRY}/${CI_PROJECT_PATH}/php-fpm:${CI_COMMIT_REF_NAME}"
    
    compile-nginx:
      stage: build
      image: docker:stable
      before_script:
        - docker login gitlab.mygitlabdomain.com:5050 -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD}
      script:
        - docker build -t "${CI_REGISTRY}/${CI_PROJECT_PATH}/nginx" ./nginx
        - docker tag "${CI_REGISTRY}/${CI_PROJECT_PATH}/nginx:latest" "${CI_REGISTRY}/${CI_PROJECT_PATH}/nginx:${CI_COMMIT_REF_NAME}"
        - docker push "${CI_REGISTRY}/${CI_PROJECT_PATH}/nginx:${CI_COMMIT_REF_NAME}"
    
    
    deploy_review:
      image:
        name: lachlanevenson/k8s-kubectl:latest
        entrypoint: ["/bin/sh", "-c"]
      stage: review
      only:
        - branches
      except:
        - tags
      environment:
        name: staging
        url: https://$CI_ENVIRONMENT_SLUG-projectdomain.com
        on_stop: stop_review
        kubernetes:
          namespace: projectdomain
      script:
        - kubectl version
        - cd deployments/
        - sed -i "s~__CI_REGISTRY_IMAGE__~${CI_REGISTRY_IMAGE}~" deployment.yaml
        - sed -i "s/__CI_ENVIRONMENT_SLUG__/${CI_ENVIRONMENT_SLUG}/" deployment.yaml ingress.yaml service.yaml
        - sed -i "s/__VERSION__/${CI_COMMIT_REF_NAME}/" deployment.yaml ingress.yaml service.yaml
        - |
          if kubectl apply -f deployment.yaml | grep -q unchanged; then
              echo "=> Patching deployment to force image update."
              kubectl patch -f deployment.yaml -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"ci-last-updated\":\"$(date +'%s')\"}}}}}"
          else
              echo "=> Deployment apply has changed the object, no need to force image update."
          fi
        - kubectl apply -f service.yaml || true
        - kubectl apply -f ingress.yaml
        - kubectl rollout status -f deployment.yaml
        - kubectl get deploy,svc,ing,pod -l app="$(echo ${CI_PROJECT_NAME} | tr "." "-")",ref="${CI_ENVIRONMENT_SLUG}"
    
    stop_review:
      image:
        name: lachlanevenson/k8s-kubectl:latest
        entrypoint: ["/bin/sh", "-c"]
      stage: review
      variables:
        GIT_STRATEGY: none
      when: manual
      only:
        - branches
      except:
        - master
        - tags
      environment:
        name: staging
        action: stop
        kubernetes:
          namespace: projectdomain
      script:
        - kubectl version
        - kubectl delete ing -l ref=${CI_ENVIRONMENT_SLUG}
        - kubectl delete all -l ref=${CI_ENVIRONMENT_SLUG}
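
In case more job output would help, here is a minimal sketch of enabling GitLab’s documented debug tracing on the failing job (merged into the deploy_review job above; I have not confirmed it surfaces anything for this particular failure):

    # Sketch: add to the existing deploy_review job for verbose traces.
    # CI_DEBUG_TRACE makes the runner print every shell command and all
    # variables, including secrets, so only enable it while debugging.
    deploy_review:
      variables:
        CI_DEBUG_TRACE: "true"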

deployment.yaml file:

    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: projectdomain-__CI_ENVIRONMENT_SLUG__
      labels:
        app: projectdomain
        ref: __CI_ENVIRONMENT_SLUG__
        track: stable
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: projectdomain
          ref: __CI_ENVIRONMENT_SLUG__
      template:
        metadata:
          labels:
            app: projectdomain
            ref: __CI_ENVIRONMENT_SLUG__
            track: stable
        spec:
          containers:
          - name: app
            image: __CI_REGISTRY_IMAGE__:__VERSION__
            imagePullPolicy: Always
            ports:
            - name: http-metrics
              protocol: TCP
              containerPort: 8000
            livenessProbe:
              httpGet:
                path: /health
                port: 8000
              initialDelaySeconds: 3
              timeoutSeconds: 2
            readinessProbe:
              httpGet:
                path: /health
                port: 8000
              initialDelaySeconds: 3
              timeoutSeconds: 2
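
Since the deploy job ends with kubectl rollout status and the deployment defines liveness/readiness probes against /health on port 8000, a failing probe could stall the rollout without much output in the job log. A sketch of what I can run directly against the cluster to check, assuming my local kubeconfig points at the same cluster and namespace the runner deploys to:

    # Recent events in the namespace: probe failures, image pull errors, etc.
    kubectl -n projectdomain get events --sort-by=.lastTimestamp

    # Rollout state and pod details for the app
    kubectl -n projectdomain describe deployment -l app=projectdomain
    kubectl -n projectdomain get pods -l app=projectdomain -o wide

    # Container logs from the deployed pods
    kubectl -n projectdomain logs -l app=projectdomain --all-containers --tail=100
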
You posted all those images, including a screenshot of documentation (!!??), and yet not one shred of log output or of the debugging attempts you have tried. – mdaniel
I mentioned that I looked in /var/log/gitlab/gitlab-rails/kubernetes.log but there was nothing written there. I tried to install the Elastic Stack for more log info, but it would not install. – Jake
Post the runner log. – Joao Vitorino
Where can I find this log? Thank you. – Jake

1 Answer


You could try the same pod deployment with GitLab 13.3 (August 2020).

Its integrated dashboard could provide additional clues.

Kubernetes Pod health dashboard

Seeing all of your system’s metrics in one place, including cluster metrics, is critical for understanding the health of your system. In GitLab 13.3, you can view the health of your Kubernetes pods in the new out-of-the-box Pod health metrics dashboard, whether you are using a managed Prometheus, or have an existing instance of Prometheus running in your cluster, by connecting it to your GitLab instance. Select the desired pod in the dropdown and see key metrics such as CPU, memory usage, Network, and more.

See Documentation and Issue.

You can see that dashboard in action in this video.