
I have the following config in Terraform

resource "aws_dynamodb_table" "scanner" {
  name           = "scanner"
  read_capacity  = 2
  write_capacity = 1
  hash_key       = "public_ip"

  attribute {
    name = "public_ip"
    type = "S"
  }

  attribute {
    name = "region"
    type = "S"
  }

  attribute {
    name = "account_id"
    type = "N"
  }

  global_secondary_index {
    name               = "cleanup-index"
    hash_key           = "account_id"
    range_key          = "region"
    read_capacity      = 1
    write_capacity     = 1
    projection_type    = "INCLUDE"
    non_key_attributes = ["vpc_id", "instance_id", "integration_id", "private_ip"]
  }
}

It worked perfectly until I upgraded from Terraform 0.7.13 to 0.9.6. Since then, Terraform tries to re-create the index on every plan:

~ aws_dynamodb_table.scanner
global_secondary_index.3508752412.hash_key:             "" => "account_id"
global_secondary_index.3508752412.name:                 "" => "cleanup-index"
global_secondary_index.3508752412.non_key_attributes.#: "0" => "4"
global_secondary_index.3508752412.non_key_attributes.0: "" => "vpc_id"
global_secondary_index.3508752412.non_key_attributes.1: "" => "instance_id"
global_secondary_index.3508752412.non_key_attributes.2: "" => "integration_id"
global_secondary_index.3508752412.non_key_attributes.3: "" => "private_ip"
global_secondary_index.3508752412.projection_type:      "" => "INCLUDE"
global_secondary_index.3508752412.range_key:            "" => "region"
global_secondary_index.3508752412.read_capacity:        "" => "1"
global_secondary_index.3508752412.write_capacity:       "" => "1"
global_secondary_index.3860163270.hash_key:             "account_id" => ""
global_secondary_index.3860163270.name:                 "cleanup-index" => ""
global_secondary_index.3860163270.non_key_attributes.#: "4" => "0"
global_secondary_index.3860163270.non_key_attributes.0: "vpc_id" => ""
global_secondary_index.3860163270.non_key_attributes.1: "instance_id" => ""
global_secondary_index.3860163270.non_key_attributes.2: "private_ip" => ""
global_secondary_index.3860163270.non_key_attributes.3: "integration_id" => ""
global_secondary_index.3860163270.projection_type:      "INCLUDE" => ""
global_secondary_index.3860163270.range_key:            "region" => ""
global_secondary_index.3860163270.read_capacity:        "1" => "0"
global_secondary_index.3860163270.write_capacity:       "1" => "0"

The Terraform documentation says:

"The DynamoDB API expects attribute structure (name and type) to be passed along when creating or updating GSI/LSIs or creating the initial table. In these cases it expects the Hash / Range keys to be provided; because these get re-used in numerous places (i.e. the table's range key could be a part of one or more GSIs), they are stored on the table object to prevent duplication and increase consistency. If you add attributes here that are not used in these scenarios it can cause an infinite loop in planning."

But I don't think my config falls into that scenario. Has anyone had a similar experience? I suspect it may be related. Thanks!

1 Answer

Sometimes the underlying provider APIs do normalization or restructuring of data submitted by Terraform, so that the data is different when it is read back.

This appears to be an example of such a situation: in the configuration the non_key_attributes are listed as ["vpc_id", "instance_id", "integration_id", "private_ip"], but they are being returned from the API as ["vpc_id", "instance_id", "private_ip", "integration_id"].

If, as it appears, the ordering of these attributes is not significant and the DynamoDB API can return them in a different order than they were submitted, then it is a bug in Terraform that these two lists are not treated as equivalent.
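The paired "create" (3508752412) and "destroy" (3860163270) entries in the plan output are a symptom of how Terraform 0.9 identifies each global_secondary_index block by a hash computed from its contents. A simplified Python sketch of the effect (the hash function here is hypothetical, not Terraform's real one):

```python
# Simplified illustration (hypothetical, NOT Terraform's actual hash
# function) of why hashing a set element in an order-sensitive way
# makes the same attributes look like two different index blocks.
import zlib

def index_hash(non_key_attributes):
    # Stand-in for the set hash Terraform computes for each
    # global_secondary_index block: order-sensitive over the list.
    return zlib.crc32(",".join(non_key_attributes).encode("utf-8"))

configured = ["vpc_id", "instance_id", "integration_id", "private_ip"]
api_result = ["vpc_id", "instance_id", "private_ip", "integration_id"]

# Same elements, different order: the hashes differ, so Terraform
# plans a "create" for one hash and a "destroy" for the other.
assert index_hash(configured) != index_hash(api_result)

# Hashing a canonical (sorted) form would treat them as equal.
assert index_hash(sorted(configured)) == index_hash(sorted(api_result))
```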

As a workaround until this bug is fixed, it may work to reorder the list in config to match what the API is returning, which should then cause Terraform to no longer see a diff. This should work as long as the API returns the list in a consistent order from one request to the next.
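In this case, the "destroy" entries in the plan (hash 3860163270) show the API returning the order vpc_id, instance_id, private_ip, integration_id, so the reordered block would look like this (assuming the API keeps returning that order):

```hcl
global_secondary_index {
  name            = "cleanup-index"
  hash_key        = "account_id"
  range_key       = "region"
  read_capacity   = 1
  write_capacity  = 1
  projection_type = "INCLUDE"

  # Reordered to match the order the API returns, as seen in the plan output:
  non_key_attributes = ["vpc_id", "instance_id", "private_ip", "integration_id"]
}
```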