
Our OpenStack team is in the process of shutting down some Ceph storage, so another team downloaded the Terraform state file from OpenStack Ceph and copied it up to a different in-house storage medium, S3.

The backend config file is set up correctly to point to S3. However, when you run a plan, Terraform thinks that none of the infrastructure is up and running and believes 55 resources need to be created, even though the state file is not corrupt and reads fine.

Everything is on OpenStack, but the Terraform tools we've found are AWS-only. `terraform import` and `terraform refresh` do not do anything; it's as if those commands don't like OpenStack's resource IDs.

I know the big hammer is to let Terraform rebuild everything from scratch, and I believe things would be fine then. Any other ideas/tools?
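For reference, this is the general form of import we tried against the OpenStack provider (the resource address and UUID here are placeholders, not our real values):

```shell
# Import an existing OpenStack instance into Terraform state.
# The address "openstack_compute_instance_v2.web" must match a
# resource block in the .tf files; the UUID is the instance ID
# as reported by OpenStack. Both values below are placeholders.
terraform import openstack_compute_instance_v2.web 6a1b2c3d-0000-0000-0000-000000000000
```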

No code, no error logs, vote to close it – BMW

1 Answer

**Solved.** 

Multi-datacenter Terraform setup. Original error (coming from Drone when the pipeline runs):
```
Initializing the backend...

Error configuring the backend "s3": 3 error(s) occurred:

* "region": required field is not set
* "key": required field is not set
* "bucket": required field is not set
```
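For context, those three required fields correspond to the standard Terraform S3 backend block. A sketch in plain HCL, using the values from this setup (the `skip_*` flags are assumptions that are commonly needed for non-AWS S3-compatible endpoints such as Ceph RGW, and `endpoint` is the attribute name on older Terraform versions):

```hcl
terraform {
  backend "s3" {
    bucket   = "dev-BigLizard-terraform"
    key      = "dev-ttc-bfl/terraform.tfstate"
    region   = "ttc"
    endpoint = "https://ttc.toss.target.com"

    # Commonly required when the endpoint is not real AWS S3:
    skip_credentials_validation = true
    skip_region_validation      = true
  }
}
```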

**Solve:**
The backend stanza in the user's init options used an underscore (`_`) where a hyphen (`-`) was required: `backend_config` instead of `backend-config`.

Client's Terraform `init_options` had:

```yaml
backend_config:
  # - "backend=true"
  # - "endpoint=https://ttc.toss.target.com"
  - "bucket=dev-BigLizard-terraform"
  - "key=dev-ttc-bfl/terraform.tfstate"
  - "region=ttc"
```

Corrected option:
```yaml
backend-config:
  # - "backend=true"
  # - "endpoint=https://ttc.toss.target.com"
  - "bucket=dev-BigLizard-terraform"
  - "key=dev-ttc-bfl/terraform.tfstate"
  - "region=ttc"
```
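Outside the Drone plugin, the same values map onto Terraform's partial backend configuration flags, which can help when testing the backend locally (a sketch using the values above):

```shell
# Pass each backend setting via -backend-config; Terraform merges
# these with the (possibly empty) backend "s3" block in the config.
terraform init \
  -backend-config="bucket=dev-BigLizard-terraform" \
  -backend-config="key=dev-ttc-bfl/terraform.tfstate" \
  -backend-config="region=ttc"
```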