To expand on jstewart379's answer:
The fundamental way of handling this is to allow your EC2 instances to access that repository in some manner.
For this to work, your GitLab instance (not just the repo) will need to be public, or at least reachable from the EC2 instances (for example, by modifying GitLab's security group to allow inbound traffic on ports 80 and 443 from the EC2 instances' security group).
After that you can authenticate via any of the methods your GitLab instance supports (generally either an SSH key or HTTPS credentials).
For the SSH key method, set up a read-only deploy key in GitLab (don't use your personal SSH key):
https://docs.gitlab.com/ee/ssh/#per-repository-deploy-keys
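Generating a dedicated key pair for the deploy key might look like this (the `secrets/` path is just an example, matching the Terraform snippet below; keep the private half out of version control):

```shell
# Generate a dedicated key pair for the GitLab deploy key.
# Passphrase is empty so the instance can use it non-interactively.
mkdir -p secrets
ssh-keygen -t ed25519 -f secrets/id_rsa -N "" -C "readonly-deploy-key"
```

You'd then paste the contents of `secrets/id_rsa.pub` into the repository's deploy key settings in GitLab.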
After that, you can install the key on the instance in a number of ways; the user data of your ASG's launch configuration is where you'll handle this.
My preferred method is to load the key onto the instance from an encrypted, private S3 bucket.
resource "aws_s3_bucket_object" "s3_object_deploy_key" {
  key    = "id_rsa"
  bucket = "${aws_s3_bucket.s3_secrets.id}"
  source = "secrets/id_rsa"
}
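For completeness, the `s3_secrets` bucket referenced above could be declared something like this (a sketch; the bucket name is an assumption, and the encryption block shows one way to get the "encrypted, private" part):

```hcl
resource "aws_s3_bucket" "s3_secrets" {
  bucket = "${var.cluster_name}-${var.env}-secrets"  # example name
  acl    = "private"

  # Encrypt objects at rest with S3-managed keys (SSE-S3).
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
```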
Important Note: Be sure to add that secrets directory to your .gitignore or you're gonna have a bad time.
After uploading the key to the bucket, grant read-only access to the bucket via an IAM instance role.
That would look something like this:
resource "aws_iam_policy" "iam-policy-s3-deploy-key" {
  name        = "${var.cluster_name}-${var.env}-read-deploy-key"
  path        = "/"
  description = "Allow reading from the S3 bucket"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucketByTags",
        "s3:GetLifecycleConfiguration",
        "s3:GetBucketTagging",
        "s3:GetInventoryConfiguration",
        "s3:GetObjectVersionTagging",
        "s3:ListBucketVersions",
        "s3:GetBucketLogging",
        "s3:ListBucket",
        "s3:GetAccelerateConfiguration",
        "s3:GetBucketPolicy",
        "s3:GetObjectVersionTorrent",
        "s3:GetObjectAcl",
        "s3:GetEncryptionConfiguration",
        "s3:GetBucketRequestPayment",
        "s3:GetObjectVersionAcl",
        "s3:GetObjectTagging",
        "s3:GetMetricsConfiguration",
        "s3:GetIpConfiguration",
        "s3:ListBucketMultipartUploads",
        "s3:GetBucketWebsite",
        "s3:GetBucketVersioning",
        "s3:GetBucketAcl",
        "s3:GetBucketNotification",
        "s3:GetReplicationConfiguration",
        "s3:ListMultipartUploadParts",
        "s3:GetObject",
        "s3:GetObjectTorrent",
        "s3:GetBucketCORS",
        "s3:GetAnalyticsConfiguration",
        "s3:GetObjectVersionForReplication",
        "s3:GetBucketLocation",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "${data.terraform_remote_state.secret-store.s3_secrets_arn}",
        "${data.terraform_remote_state.secret-store.s3_secrets_arn}/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:HeadBucket"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}
You'd set up an instance role like this and assign it to your launch configuration:
data "aws_iam_policy_document" "instance-assume-role-policy" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "iam-role-instance" {
  name               = "${var.cluster_name}-${var.env}-instance"
  path               = "/system/"
  assume_role_policy = "${data.aws_iam_policy_document.instance-assume-role-policy.json}"
}

resource "aws_iam_role_policy_attachment" "iam-attach-deploy-key" {
  role       = "${aws_iam_role.iam-role-instance.name}"
  policy_arn = "${aws_iam_policy.iam-policy-s3-deploy-key.arn}"
}
After getting the key in place, you can do as you wish with the repository.
Hope that helps!