3 votes

How do I stop the retry loop behind the error message below so that Terraform actually creates the AWS EC2 instances?

Terraform code below:

provider "aws" {
  region = "${var.location}"
}

resource "aws_instance" "ins1_ec2" {
  ami           = "${var.ami}"
  instance_type = "${var.inst_type}"

  tags = {
    Name = "cluster"
  }
  provisioner "remote-exec" {
    inline = [
      "hostnamectl set-hostname centos-76-1",
    ]
  }
}

resource "aws_eip" "ins1_eip" {
  instance = "${aws_instance.ins1_ec2.id}"
  vpc      = false
}

resource "aws_instance" "ins2_ec2" {
  ami           = "${var.ami}"
  instance_type = "${var.inst_type}"

  provisioner "remote-exec" {
    inline = [
      "hostnamectl set-hostname centos-76-2",
    ]
  }

  tags = {
    Name = "cluster"
  }
}

resource "aws_eip" "ins2_eip" {
  instance = "${aws_instance.ins2_ec2.id}"
  vpc      = false
}

It errors out with the below message:

* aws_instance.ins2_ec2: timeout - last error: ssh: handshake failed: agent: failed to list keys
* aws_instance.ins1_ec2: timeout - last error: ssh: handshake failed: agent: failed to list keys

I have a pem file on my laptop which I can copy to my AWS build server, so can I use key_name in the EC2 instance creation? The pem file I have is named "test.pem"; is that the private key?

What I don't know is how to log in to the VM: with the key (test.pem) I already have, or with a username/password. There does not seem to be a way to set a username and password in the aws_instance block.

The Terraform EC2 instance documentation is here: https://www.terraform.io/docs/providers/aws/r/instance.html

If you have a key pair created in the AWS console, then you need to provide the name of that key in the Terraform script. Read here. Once you have created the instance, you can SSH to it using the pem file you downloaded for the same key. Read here. - Chetan

* aws_instance.ins2_ec2: interrupted - last error: ssh: handshake failed: agent: failed to list keys
* aws_instance.ins1_ec2: 1 error(s) occurred:
* aws_instance.ins1_ec2: Error launching source instance: InvalidKeyPair.NotFound: The key pair 'SrinivasTest.pem' does not exist status code: 400, request id: 81cab80d-48e6-43c1-aa53-807417599e33
- learner

  [centos@ip-172-31-29-250 terraform]$ ls SrinivasTest.pem
  SrinivasTest.pem
  [centos@ip-172-31-29-250 terraform]$ pwd
  /data/terraform
  [centos@ip-172-31-29-250 terraform]$ ls main.tf
  main.tf
  [centos@ip-172-31-29-250 terraform]$ cat main.tf | grep key_name
  key_name = "SrinivasTest.pem"

- learner

I added the private key on the server in the same place as the main.tf file and referenced it internally as "SrinivasTest.pem", but it is still not working... - learner

You don't need the private key or pem file in Terraform. You need the key name from the AWS console. The pem file is what you use for SSH. For creating the instance you need only the name of the key, so that Terraform will associate that key with the instance. - Chetan
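The InvalidKeyPair.NotFound error in the comments above points at the same mistake: key_name was set to the local filename of the downloaded key. Assuming the key pair in the AWS console is actually named SrinivasTest, the attribute would reference that console name only, as in this sketch:

```hcl
resource "aws_instance" "ins1_ec2" {
  ami           = "${var.ami}"
  instance_type = "${var.inst_type}"

  # key_name is the key pair's name as it appears in the AWS console,
  # NOT the filename of the downloaded private key.
  key_name = "SrinivasTest"   # not "SrinivasTest.pem"
}
```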

2 Answers

3 votes

If you want to attach a key to an EC2 instance while creating it with Terraform, you first need to create a key pair in the AWS console, download the .pem file, and copy the key pair name to the clipboard.

[Image: SampleKey key pair on the AWS console]

The Terraform script requires the correct key name to associate it with the EC2 instance.

If you want to perform any remote action on the instance from Terraform, the following things are required.

  1. The instance must have an IP that Terraform can reach.
  2. Terraform needs to connect to the instance via SSH or RDP.
  3. Both methods require the key file (.pem file) downloaded earlier to be used when making the connection.

So the connection block is the missing part in your Terraform configuration.

Consider the following Terraform configuration, which creates one t1.micro instance with a key associated with it and then creates a file on the instance by SSHing into it.

Network requirements, such as the VPC, subnet, route tables, internet gateway, and security groups, are already created in the AWS console, and their respective IDs are used in the Terraform configuration below.
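If the security group does not exist yet, it can also be managed in Terraform. This is only a sketch, not part of the configuration below: the resource name ssh_only is made up, and <<vpc_id>> is a placeholder for an existing VPC's ID.

```hcl
# Hypothetical security group allowing inbound SSH from anywhere,
# so that the remote-exec provisioner can reach the instance.
resource "aws_security_group" "ssh_only" {
  name   = "allow-ssh"
  vpc_id = "<<vpc_id>>"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"   # all outbound traffic
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

Its ID would then be referenced as "${aws_security_group.ssh_only.id}" in vpc_security_group_ids instead of a hard-coded value.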

provider "aws" {
    region     = "<<region>>"
    access_key = "<<access_key>>"
    secret_key = "<<secret_key>>"
}

resource "aws_instance" "ins1_ec2" {
    ami           = "<<ami_id>>"
    instance_type = "<<instance_type>>"
    //id of the public subnet so that the instance is accessible via internet to do SSH
    subnet_id = "<<subnet_id>>"

    //id of the security group which has ports open to all the IPs
    vpc_security_group_ids=["<<security_group_id>>"]

    //assigning public IP to the instance is required.
    associate_public_ip_address=true
    key_name = "<<key_name>>"
    tags = {
       Name = "cluster"
    }

    provisioner "remote-exec" {
        inline = [
            //Executing command to creating a file on the instance
            "echo 'Some data' > SomeData.txt",
        ]

        //Connection to be used by provisioner to perform remote executions
        connection {
            //Use public IP of the instance to connect to it.
            host          = "${aws_instance.ins1_ec2.public_ip}"
            type          = "ssh"
            user          = "ec2-user"
            private_key   = "${file("<<pem_file>>")}"
            timeout       = "1m"
            agent         = false
        }
    }
}

resource "aws_eip" "ins1_eip" {
    instance = "${aws_instance.ins1_ec2.id}"
    vpc      = true
}
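To run the configuration above, the standard Terraform workflow applies (these are stock Terraform CLI commands, not specific to this answer):

```shell
terraform init    # download the AWS provider plugin
terraform plan    # preview the instance, EIP, and provisioner to be created
terraform apply   # create the resources and run the remote-exec provisioner
```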

When you run the terraform apply command, if Terraform is able to SSH to the instance, it should display a message like the following.

[Image: terraform apply output showing the provisioner connecting over SSH]

You might still see errors if the commands being executed fail for some other reason or due to permission issues. But if you see a message like the one above, Terraform has connected to the instance successfully.

That's the Terraform configuration that will create an EC2 instance, connect to it via SSH, and perform remote execution tasks on it.

The .pem file can also be used to SSH into the instance from your local machine.
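For example (a sketch: the key filename and IP are placeholders, and the login user depends on the AMI):

```shell
# ssh refuses keys with open permissions, so restrict the file first
chmod 400 test.pem

# "centos" is the default user on CentOS AMIs; Amazon Linux uses "ec2-user".
# Replace <public-ip> with the instance's public IP.
ssh -i test.pem centos@<public-ip>
```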

This should help you resolve your issue.

More information about the connection block in Terraform is available here.

1 vote

The following worked for me:

  1. Create a security group and make sure you add SSH (port 22) with source 0.0.0.0/0 to the inbound rules.
  2. Copy the ID of the security group and add it to the Terraform config in the vpc_security_group_ids list.
  3. Head to the AWS console and either create a new key pair or locate the existing key you want to use.
  4. Get the name of the key pair from the console and refer to it in the Terraform config as key_name.
  5. If you created a new key, make sure you download the pem file and change its permissions with chmod 400 myPrivateKey.pem.
  6. Once you have applied the Terraform config, connect with ssh -i myPrivateKey.pem ec2-user@<public-ip>.

Your Terraform config for the EC2 resource will look like:

resource "aws_instance" "my-sample" {
  ami                         = "ami-xxxxx"
  instance_type               = "t2.micro"
  associate_public_ip_address = true
  key_name                    = "MyPrivateKey"
  vpc_security_group_ids      = ["sg-0f073685ght54lkm"]
}