I am trying to set up a new build pipeline for one of our projects. As a first step, I build a new Docker image for the subsequent test stages. This step works fine. However, when the test jobs are executed, the image is pulled, but the commands run on the host instead of inside the container.

Here's the content of my .gitlab-ci.yml:

stages:
  - build
  - analytics

variables:  
  TEST_IMAGE_NAME: 'registry.server.de/testimage'

build_testing_container:
  stage: build

  image: docker:stable
  services:
  - dind

  script:
    - docker build --target=testing -t $TEST_IMAGE_NAME .
    - docker push $TEST_IMAGE_NAME

mess_detection:
  stage: analytics
  image: $TEST_IMAGE_NAME

  script:
    - vendor/bin/phpmd app html tests/md.xml --reportfile mess_detection.html --suffixes php

  artifacts:
    name: "${CI_JOB_NAME}_${CI_COMMIT_REF_NAME}"
    paths:
      - mess_detection.html
    expire_in: 1 week
    when: always

  except:
    - production

  allow_failure: true

What do I need to change to make the GitLab runner execute the script commands inside the container it is successfully pulling?

UPDATE:

It's getting even more interesting: I just changed the script to sleep for a while so I could attach to the container. When I run pwd from the CI script, it prints /builds/namespace/project. However, running pwd in the exact same container on the server via docker exec returns /app, as it should.
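To make that comparison reproducible, the sleep-and-inspect step can be expressed as a throwaway job (a sketch; the job name debug_environment is made up, the image variable comes from the config above):

```yaml
debug_environment:
  stage: analytics
  image: $TEST_IMAGE_NAME
  script:
    - pwd          # prints /builds/namespace/project in my pipeline, not /app
    - sleep 300    # keeps the job alive so the container can be inspected with docker exec
```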

UPDATE2:

After some more research, I learned that GitLab executes four sub-steps for each build step:

  1. Prepare : Create and start the services.
  2. Pre-build : Clone, restore cache and download artifacts from previous stages. This is run on a special Docker Image.
  3. Build : User build. This is run on the user-provided docker image.
  4. Post-build : Create cache, upload artifacts to GitLab. This is run on a special Docker Image.

It seems like in my case, step 3 isn't executed properly and the command still runs inside the GitLab runner's Docker image.

UPDATE3:

In the meantime I tested executing the mess_detection step on a separate machine using the command gitlab-runner exec docker mess_detection. The behaviour is exactly the same, so it's not GitLab-specific; it has to be some configuration option in either the deployment script or the runner config.

Did you try cd-ing in your script into a specific directory defined in your image? – gasc
I did try that. The directory I created in my image to host the application (/app) is not present at all. – Daniel Becker
Are you using a runner that you set up yourself? Did you try using the tags: docker key in your job definition? – gasc
I went as far as starting the runner by hand, enforcing docker execution, with just one build step. – Daniel Becker
I've opened an issue for this on gitlab: gitlab.com/gitlab-org/gitlab-runner/issues/3805 – Daniel Becker

1 Answer


This is the usual behaviour. The image keyword names the Docker image that the Docker executor runs to perform the CI tasks. You can use the services keyword, which defines another Docker image that is run during your job and is linked to the image that the image keyword defines. This allows you to access the service image during build time. Access can happen through a script or an entry point. For example, in the Dockerfile of the image you are going to build, add the script you want to execute:

ADD exemple.sh /

RUN chmod +x exemple.sh
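For completeness, a minimal Dockerfile around these two lines might look like this (a sketch; the alpine base image is an assumption, and exemple.sh is the answerer's placeholder script):

```dockerfile
# Assumed base image -- use whatever your test image is actually built from
FROM alpine:3.18

# Copy the helper script into the image and make it executable
ADD exemple.sh /
RUN chmod +x /exemple.sh
```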

Then you can add the image as a service in gitlab-ci.yml, and the script changes to:

docker exec <container_name> /exemple.sh

This runs the script inside the service container. Alternatively, specify an entrypoint in the Docker image, and then the script becomes:

docker exec <container> /bin/sh -c "cmd1;cmd2;...;cmdn"
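Putting the answer's pieces together, the mess_detection job might look roughly like this (a sketch, not a tested configuration; the alias testimage is made up, and whether docker exec can reach the service container depends on the runner setup):

```yaml
mess_detection:
  stage: analytics
  image: docker:stable
  services:
    - name: $TEST_IMAGE_NAME
      alias: testimage
  script:
    # Run the original phpmd command inside the service container
    - docker exec testimage /bin/sh -c "vendor/bin/phpmd app html tests/md.xml --reportfile mess_detection.html --suffixes php"
```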

Here's a reference:

https://docs.gitlab.com/ee/ci/docker/using_docker_images.html