
I am trying to get GitLab CI to run terraform init in a specific root directory, but somehow the pipeline doesn't pick it up. Might there be something wrong with the structure of my .gitlab-ci.yml file? I have tried moving everything to the repository root, which works fine, but I'd like to have some folder structure in the repository to make it more readable for future developers.
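
For reference, the repository is laid out roughly like this (the .tf file names are just placeholders):

.
├── .gitlab-ci.yml
└── src
    └── envs
        └── infrastruktur
            ├── main.tf
            └── variables.tf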

default:
  tags:
    - aws

image:
  name: hashicorp/terraform:light
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

variables:
  # If not using GitLab's HTTP backend, remove this line and specify TF_HTTP_* variables
  TF_STATE_NAME: default
  TF_CACHE_KEY: default
  # If your terraform files are in a subdirectory, set TF_ROOT accordingly
  TF_ROOT: ./src/envs/infrastruktur

before_script:
  - rm -rf .terraform
  - terraform --version
  - export AWS_ACCESS_KEY_ID
  - export AWS_ROLE_ARN
  - export AWS_DEFAULT_REGION

stages:
  - init
  - validate
  - plan
  - pre-apply

init:
  stage: init
  script:
    - terraform init

Everything is fine until the validate stage, but as soon as the pipeline reaches the plan stage, it says that it cannot find any config files.

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: plan
  script:
    - terraform plan -out "planfile"
  dependencies:
    - validate
  artifacts:
    paths:
      - planfile

apply:
  stage: pre-apply
  script:
    - terraform apply -input=false "planfile"
  dependencies:
    - plan
  when: manual
I don't see any terraform init being executed in your .gitlab-ci.yml. - danielnelz
I had to shorten the script in order to post it. Would the init stage be the part where I include the working directory? It was initially part of the before_script. - Lucky
It is easier to help you if you post your complete pipeline. - danielnelz
Point taken, I have updated the post. - Lucky

1 Answer


You need to cd into your configuration folder in every job, and after each job you need to pass the contents of /src/envs/infrastruktur, the directory Terraform operates on, to the next job via artifacts. I have omitted the remainder of your pipeline for brevity.

before_script:
  - rm -rf .terraform
  - terraform --version
  - cd $TF_ROOT
  - export AWS_ACCESS_KEY_ID
  - export AWS_ROLE_ARN
  - export AWS_DEFAULT_REGION

stages:
  - init
  - validate
  - plan
  - pre-apply

init:
  stage: init
  script:
    - terraform init
  artifacts:
    paths:
      - $TF_ROOT

validate:
  stage: validate
  script:
    - terraform validate
  artifacts:
    paths:
      - $TF_ROOT

plan:
  stage: plan
  script:
    - terraform plan -out "planfile"
  dependencies:
    - validate
  artifacts:
    paths:
      # planfile is written inside $TF_ROOT (the job cds there first), and
      # artifact paths are relative to the project root, so passing $TF_ROOT
      # along also carries the planfile to the next job
      - $TF_ROOT
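
For completeness, the omitted apply job would follow the same pattern. A minimal sketch, reusing the stage and job names from the question (the before_script has already cd'd into $TF_ROOT, so the planfile is found in the working directory):

apply:
  stage: pre-apply
  script:
    - terraform apply -input=false "planfile"
  dependencies:
    - plan
  when: manual

As an alternative to the cd in the before_script, Terraform 0.14 and later accept a global -chdir option (for example terraform -chdir=$TF_ROOT init), which avoids changing the working directory at all; whether you can rely on it depends on the Terraform version shipped in the hashicorp/terraform:light image.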