4 votes

I am trying to set up a GitLab CE server running in Docker, on my local Windows machine for now. While configuring GitLab CI, I am facing an issue when the artifact is uploaded at the end of the job:

WARNING: Uploading artifacts as "archive" to coordinator... failed id=245 responseStatus=500 Internal Server Error status=500 token=i3yfe7rf

Job failure in GitLab

Before showing more logs, here is my setup. I am using several containers:

  • one running GitLab
  • one running the CI runners (gitlab-runner)
  • one running a container registry
  • one recently added to store artifacts on a local S3 server (MinIO)
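
The setup above could be sketched as a docker-compose.yml along the following lines. This is an assumption for illustration only: image tags, service names and the internal ports are hypothetical and not taken from the question; only the host ports 6180 and 9115 match the config.toml shown below.

```yaml
# Hypothetical sketch of the four-container setup (names/tags assumed)
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    ports:
      - "6180:80"        # matches the url in config.toml below
  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # docker executor
  registry:
    image: registry:2
    ports:
      - "5000:5000"
  minio:
    image: minio/minio
    command: server /data
    ports:
      - "9115:9000"      # matches ServerAddress in config.toml below
```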

This is the config.toml file for the only registered runner. Note that this version uses a local S3 server for the cache, but the same error occurs with the local cache.

[[runners]]
  name = "Docker Runner"
  url = "http://192.168.1.18:6180/"
  token = "JHubtvs8kFaQjJNC6r6Z"
  executor = "docker"
  clone_url = "http://192.168.1.18:6180/"
  [runners.custom_build_dir]
  [runners.cache]
    Type = "s3"
    Path = "mycustom-s3"
    Shared = true
    [runners.cache.s3]
      ServerAddress = "192.168.1.18:9115"
      AccessKey = "XXXXXX"
      SecretKey = "XXXXXX"
      BucketName = "runner"
      Insecure = true
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:19.03.1"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0

This is my CI YAML file; I took this example from a YouTube video. The same error occurs for all projects in GitLab.

image: "ruby:latest"

cache:
  paths:
    - output

before_script:
  - bundle install --path vendor  # Install dependencies into ./vendor/ruby

build:
  stage: build
  tags:
    - docker,ruby
  artifacts:
    paths:
      - output/
    expire_in: 5 days   
  script:
    - echo "In the build stage"
    - mkdir -p output
    - echo date > output/$(date +%s).txt
    - ls -l output
    - ls -l vendor

Running the job ends with the above-mentioned error.

More errors can be seen in the log files:

  • In exceptions_json.log:

{"severity":"ERROR","time":"2020-12-16T11:24:11.865Z","correlation_id":"ZxQ4vVdD1J1","tags.correlation_id":"ZxQ4vVdD1J1","tags.locale":"en","exception.class":"Errno::ENOENT","exception.message":"No such file or directory @ apply2files - /var/opt/gitlab/gitlab-rails/shared/artifacts/tmp/work/1608117851-2655-0006-1409/artifacts.zip"...

  • In production.log:

    Started POST "/api/v4/jobs/245/artifacts/authorize?artifact_format=zip&artifact_type=archive&expire_in=5+days" for 172.17.0.1 at 2020-12-16 11:24:07 +0000
    Started POST "/api/v4/jobs/245/artifacts?artifact_format=zip&artifact_type=archive&expire_in=5+days" for 172.17.0.1 at 2020-12-16 11:24:07 +0000
    Processing by Gitlab::RequestForgeryProtection::Controller#index as HTML
    Parameters: {"file.remote_url"=>"", "file.size"=>"389", "file.sha1"=>"da6c0be0e7a3a4791035bc9f851439dcb0e94135", "file.sha256"=>"6539358258571174fb3bed6ab68db78705efdd9ed4b7c423bab0b19eb9aea531", "file.path"=>"/var/opt/gitlab/gitlab-rails/shared/artifacts/tmp/uploads/artifacts.zip609500792", "file.remote_id"=>"", "file.name"=>"artifacts.zip", "file.md5"=>"d432c9507b8879dfad13342c6b60f73b", "file.sha512"=>"5ea4e5b6bcbbffb2d3f81e8c05ede92b630b6033ea3f09dc61a4a4bbc7919088cf4a1eab46cd54e9e994b35908065412779e77caf2612341fed3c36449947bdd", "file.gitlab-workhorse-upload"=>"...", "metadata.name"=>"metadata.gz", "metadata.path"=>"/var/opt/gitlab/gitlab-rails/shared/artifacts/tmp/uploads/metadata.gz123385207", "metadata.remote_url"=>"", "metadata.sha256"=>"93d549eb28b503108a4e9da0cb08cac02cd70041aedcbef418aa5c969d1a0d1e", "metadata.size"=>"175", "metadata.remote_id"=>"", "metadata.sha512"=>"3c7ff2a2a992695c2082c37340be7caa2955e9ba4ff50015c787f790146da1ac7f6884685797db1bc59eb8045bab1fac2fc1300114542059cddcec2593ea5934", "metadata.md5"=>"c7b52bc3b9b2d7dbf780aa919917b562", "metadata.sha1"=>"c71ab07f5bdf21d8d3b5a6507a0747167d4a80de", "metadata.gitlab-workhorse-upload"=>"...", "file"=>#<UploadedFile:0x00007fb805291cf0 @tempfile=#File:/var/opt/gitlab/gitlab-rails/shared/artifacts/tmp/uploads/artifacts.zip609500792, @size=389, @content_type="application/octet-stream", @original_filename="artifacts.zip", @sha256="6539358258571174fb3bed6ab68db78705efdd9ed4b7c423bab0b19eb9aea531", @remote_id="">, "artifact_format"=>"zip", "artifact_type"=>"archive", "expire_in"=>"5 days", "metadata"=>#<UploadedFile:0x00007fb804cdcfa8 @tempfile=#File:/var/opt/gitlab/gitlab-rails/shared/artifacts/tmp/uploads/metadata.gz123385207, @size=175, @content_type="application/octet-stream", @original_filename="metadata.gz", @sha256="93d549eb28b503108a4e9da0cb08cac02cd70041aedcbef418aa5c969d1a0d1e", @remote_id="">}
    Can't verify CSRF token authenticity.
    This CSRF token verification failure is handled internally by GitLab::RequestForgeryProtection
    Unlike the logs may suggest, this does not result in an actual 422 response to the user
    For API requests, the only effect is that current_user will be nil for the duration of the request
    Completed 422 Unprocessable Entity in 8ms (ActiveRecord: 0.0ms | Elasticsearch: 0.0ms | Allocations: 241)

    Errno::ENOENT (No such file or directory @ apply2files - /var/opt/gitlab/gitlab-rails/shared/artifacts/tmp/work/1608117848-2659-0005-4872/artifacts.zip): /opt/gitlab/embedded/lib/ruby/gems/2.7.0/gems/carrierwave-1.3.1/lib/carrierwave/sanitized_file.rb:320:in `chmod'...

I have spent the last three days searching for the root cause of this, and despite having read many articles (here and on the GitLab support site), I can't get it resolved.

The error points to the file /var/opt/gitlab/gitlab-rails/shared/artifacts/tmp/work/1608117848-2659-0005-4872/artifacts.zip. The directory /var/opt/gitlab/gitlab-rails/shared/artifacts/tmp/work/ definitely exists, but the sub-directory 1608117848-2659-0005-4872 does not.

1
I'm having the exact same problem. – goroncy
Same here; resetting runner tokens did not help (as suggested in the GitLab issues). – cguedel
@bokabraq could you test the same configuration on a Linux machine? – goroncy
@goroncy, sorry, I don't have a Linux machine to test this on. Using Docker volumes instead of bind mounts did the trick on my side. – bokabraq
@cguedel I lost myself in many issues on the GitLab support site, though it was never exactly the same issue I had. On the bright side, I learned a lot reading all of it. – bokabraq

1 Answer

3 votes

I had the same problem this morning and finally solved it.

I was using bind mounts for the data/config/log volumes of the GitLab container, which apparently causes a problem when uploading the artifacts.

After switching to Docker volumes, artifact upload works.
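
The change can be sketched in docker-compose terms as follows. This is a minimal illustration, assuming a typical gitlab/gitlab-ce setup; the service name, image tag and host paths are assumptions, not taken from the answer.

```yaml
# Hypothetical docker-compose.yml fragment for the GitLab container
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    # Bind mounts like these were the problematic variant
    # (host directories mapped into the container):
    #   - ./gitlab/config:/etc/gitlab
    #   - ./gitlab/logs:/var/log/gitlab
    #   - ./gitlab/data:/var/opt/gitlab
    # Named Docker volumes instead, which made artifact upload work:
    volumes:
      - gitlab-config:/etc/gitlab
      - gitlab-logs:/var/log/gitlab
      - gitlab-data:/var/opt/gitlab

volumes:
  gitlab-config:
  gitlab-logs:
  gitlab-data:
```

With named volumes, Docker manages the storage itself instead of mapping it through the Windows host filesystem, which avoids the permission and file-semantics mismatches that bind mounts on Docker Desktop for Windows can introduce under /var/opt/gitlab.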