I'm upgrading an Aurora RDS cluster from Aurora 1.x (MySQL 5.6) to Aurora 2.x (MySQL 5.7). I've already done the upgrade in AWS (manually, via a snapshot, since it isn't yet supported with a simple click), and now I'm trying to terraform state rm the old resources and import the upgraded ones (I've already updated the Terraform code). The import succeeds, but terraform plan then wants to destroy and recreate the cluster (and therefore the instance too) for these reasons:
availability_zones.#: "3" => "2" (forces new resource)
availability_zones.1924028850: "eu-west-1b" => "eu-west-1b"
availability_zones.3953592328: "eu-west-1a" => "eu-west-1a"
availability_zones.94988580: "eu-west-1c" => "" (forces new resource)
[...]
engine: "aurora-mysql" => "aurora" (forces new resource)
[...]
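For reference, the state surgery I ran was along these lines (the resource addresses and identifiers below are placeholders for my actual ones):

```shell
# Drop the pre-upgrade cluster and instance from Terraform state
# (placeholder resource addresses):
terraform state rm aws_rds_cluster.main
terraform state rm aws_rds_cluster_instance.main

# Import the manually upgraded Aurora 2.x cluster and instance back in
# (placeholder AWS identifiers):
terraform import aws_rds_cluster.main my-cluster-identifier
terraform import aws_rds_cluster_instance.main my-instance-identifier
```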
The changes I did in TF were, for the RDS cluster:
- engine = "aurora"
- engine_version = "5.6.10a"
+ engine = "aurora-mysql"
+ engine_version = "5.7.12"
And for the parameter groups (both cluster and instance):
- family = "aurora5.6"
+ family = "aurora-mysql5.7"
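For completeness, after those changes the relevant resources look roughly like this (resource names, identifiers, and the availability_zones list are placeholders for my real values):

```hcl
# Sketch of the post-upgrade configuration; identifiers are placeholders.
resource "aws_rds_cluster" "main" {
  cluster_identifier = "my-cluster-identifier"
  engine             = "aurora-mysql"
  engine_version     = "5.7.12"
  availability_zones = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
  # ... other settings unchanged ...
}

resource "aws_rds_cluster_parameter_group" "main" {
  name   = "my-cluster-params"
  family = "aurora-mysql5.7"
}

resource "aws_db_parameter_group" "main" {
  name   = "my-instance-params"
  family = "aurora-mysql5.7"
}
```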
The parameter groups import fine.
I suspect the problem is that Terraform is trying to change the correct engine, "aurora-mysql", back to the wrong one, "aurora". But why? The cluster was imported correctly, and the engine is also correct in my Terraform code. Is this a Terraform bug? I can't find anything about it.
I'm using Terraform v0.11.7.
From the AWS documentation:
The engine name for Aurora MySQL 2.x is aurora-mysql; the engine name for Aurora MySQL 1.x continues to be aurora. The engine version for Aurora MySQL 2.x is 5.7.12; the engine version for Aurora MySQL 1.x continues to be 5.6.10a. The default parameter group for Aurora MySQL 2.x is default.aurora-mysql5.7; the default parameter group for Aurora MySQL 1.x continues to be default.aurora5.6. The DB cluster parameter group family name for Aurora MySQL 2.x is aurora-mysql5.7; the DB cluster parameter group family name for Aurora MySQL 1.x continues to be aurora5.6.