
I am trying to deploy my AWS infrastructure with Terraform from a GitLab CI/CD pipeline. I am using the GitLab-managed image and its default Terraform template.

I have configured the S3 backend, and it points to the S3 bucket used to store the tf state file. I have stored CI/CD variables in GitLab for 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY' and 'S3_BUCKET'.
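For context, GitLab exposes those CI/CD variables to the job as plain environment variables, so the S3 backend picks up AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY automatically, while the bucket name has to be handed to the backend explicitly. A minimal sketch of how that wiring could look in the init step (the key and region values below are placeholders, not taken from my actual configuration):

before_script:
  - terraform --version
  # credentials are read from the environment (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY);
  # the bucket comes from the S3_BUCKET variable, key and region here are just placeholders
  - terraform init -backend-config="bucket=${S3_BUCKET}" -backend-config="key=terraform.tfstate" -backend-config="region=us-east-1"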

Everything was working fine until I changed 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY' and 'S3_BUCKET' to point to a different AWS account.

Now I am getting the following error:

$ terraform init
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
Error: Error loading state:
AccessDenied: Access Denied
status code: 403, request id: XXXXXXXXXXXX,
host id: XXXXXXXXXXXXXXXXXXXXX
Terraform failed to load the default state from the "s3" backend.
State migration cannot occur unless the state can be loaded. Backend
modification and state migration has been aborted. The state in both the
source and the destination remain unmodified. Please resolve the
above error and try again.
Cleaning up file based variables 00:00
ERROR: Job failed: exit code 1

Since this issue started after I changed the access_key and secret_key (it was still working fine from my local VS Code), I commented out the 'cache:' block in the .gitlab-ci.yml file, and it worked!

The following is my .gitlab-ci.yml file:

.gitlab-ci.yml

stages:
  - validate
  - plan
  - apply
  - destroy



image:
  name: registry.gitlab.com/gitlab-org/gitlab-build-images:terraform
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

# Default output file for Terraform plan
variables:
  PLAN: plan.tfplan
  JSON_PLAN_FILE: tfplan.json
  STATE: dbrest.tfstate

cache:
  paths:
    - .terraform



before_script:
  - alias convert_report="jq -r '([.resource_changes[]?.change.actions?]|flatten)|{\"create\":(map(select(.==\"create\"))|length),\"update\":(map(select(.==\"update\"))|length),\"delete\":(map(select(.==\"delete\"))|length)}'"
  - terraform --version
  - terraform init



validate:
  stage: validate
  script:
    - terraform validate
  only:
    - tags


plan:
  stage: plan
  script:
    - terraform plan -out=plan_file
    - terraform show --json plan_file > plan.json
  artifacts:
    paths:
      - plan.json
    expire_in: 2 weeks
    when: on_success
    reports:
      terraform: plan.json
  only:
    - tags
  allow_failure: true 


apply:
  stage: apply
  extends: plan
  environment:
    name: production
  script:
    - terraform apply --auto-approve
  dependencies:
    - plan
  only:
    - tags
  when: manual


terraform destroy:
  extends: apply
  stage: destroy
  script:
    -  terraform destroy --auto-approve
  needs: ["plan","apply"]
  when: manual  
  only:
    - tags  

The issue clearly happens if I don't comment out the block below. However, it used to work before I made the changes to the AWS access_key and secret_key.

#cache:
#  paths:
#    - .terraform
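
My assumption about why commenting this out helps: the cached .terraform directory keeps a record of the previously initialised backend (in .terraform/terraform.tfstate), so after the credential/bucket change Terraform thinks the backend configuration changed and tries to load the old state for a migration with credentials that no longer have access to it, hence the 403. If that is right, an alternative to disabling the cache entirely would be to skip the migration attempt with -reconfigure, roughly:

before_script:
  - terraform --version
  # -reconfigure re-initialises against the current backend settings and ignores
  # whatever backend was recorded in the cached .terraform directory, so no state
  # migration from the old bucket is attempted
  - terraform init -reconfigure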

When the cache block was not commented out, the pipeline failed with the AccessDenied error shown above.


Is the cache being stored somewhere? And how do I clear it? I think it's related to GitLab.


1 Answer


It seems that the runner cache can be cleared from the GitLab UI itself. Go to GitLab -> CI/CD -> Pipelines and hit the 'Clear Runner Cache' button to clear the cache.

It actually works!
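
On top of clearing the cache, a small preventive tweak (just a sketch, not part of the fix above) is to give the cache an explicit key, so a stale .terraform directory is less likely to be reused across unrelated runs and can be invalidated by simply changing the key:

cache:
  key: "$CI_COMMIT_REF_SLUG"   # one cache per branch; change the key to force a fresh .terraform
  paths:
    - .terraform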