
I'm using the below Terraform file to create an AKS cluster:

resource "random_pet" "prefix" {}

resource "kubernetes_persistent_volume" "example" {
  metadata {
    name = "example"
  }
  spec {
    capacity = {
      storage = "1Gi"
    }
    access_modes = ["ReadWriteOnce"]
    persistent_volume_source {
      azure_disk {
        caching_mode  = "None"
        data_disk_uri = azurerm_managed_disk.example.id
        disk_name     = "example"
        kind          = "Managed"
      }
    }
  }
}

resource "azurerm_kubernetes_cluster" "example" {
  name                = "${random_pet.prefix.id}-aks"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "${random_pet.prefix.id}-k8s"

  default_node_pool {
    name            = "example"
    node_count      = 2
    vm_size         = "Standard_D2_v2"
    os_disk_size_gb = 30
  }

  identity {
    type = "SystemAssigned"
  }

  role_based_access_control {
    enabled = true
  }

  addon_profile {
    kube_dashboard {
      enabled = true
    }
  }

  tags = {
    environment = "Demo"
  }
}

provider "azurerm" {
  version = ">=2.20.0"
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "${random_pet.prefix.id}-rg"
  location = "westus2"
}


resource "azurerm_managed_disk" "example" {
  name                 = "example"
  location             = azurerm_resource_group.example.location
  resource_group_name  = azurerm_resource_group.example.name
  storage_account_type = "Standard_LRS"
  create_option        = "Empty"
  disk_size_gb         = "1"
  tags = {
    environment = azurerm_resource_group.example.name
  }
}

I've derived the above file from Terraform's tutorial on setting up an AKS cluster: https://learn.hashicorp.com/tutorials/terraform/aks

And I've used Terraform's example of setting up an Azure managed disk and k8s volume here: https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/persistent_volume

When I try to run the above config with Terraform I get the following error:

Error: Post "https://pumped-llama-k8s-419df981.hcp.westus2.azmk8s.io:443/api/v1/persistentvolumes": dial tcp: lookup pumped-llama-k8s-419df981.hcp.westus2.azmk8s.io on 192.168.1.1:53: no such host

  on main.tf line 3, in resource "kubernetes_persistent_volume" "example":
   3: resource "kubernetes_persistent_volume" "example" {

I get this same error whenever I attempt to use any non-azurerm Terraform resource, e.g. when trying to configure roles and role bindings with resource "kubernetes_role".
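
For example (a minimal sketch, with placeholder names), a role like this fails with the same lookup error:

resource "kubernetes_role" "example" {
  metadata {
    name = "example-role"
  }

  # Read-only access to pods in the role's namespace
  rule {
    api_groups = [""]
    resources  = ["pods"]
    verbs      = ["get", "list", "watch"]
  }
}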

Judging by the error message's URL, Terraform seems to be connecting to some default endpoint (HashiCorp Cloud Platform, I assume), so I'm guessing I need to explicitly tell these non-azurerm resources that I'm connecting to an Azure-hosted Kubernetes cluster; however, I can't figure out how to do that.

Comments:

It seems it cannot connect to AKS from your local machine. What is the output of kubectl config view or kubectl get nodes? – Charles Xu

Sounds more like a DNS name-resolution issue. Is nslookup working? – harshavmb

I don't think it's an issue with connectivity to AKS, as the remainder of the Terraform resources are created; I can go to the AKS cluster on Azure, and it's all there and working. It's just the Kubernetes-specific Terraform resources like "kubernetes_persistent_volume" or "kubernetes_role" that fail. It looks like Terraform is attempting to connect to a nonexistent HashiCorp Cloud server rather than my AKS instance. – Sam Carswell

1 Answer


Turns out I needed to define the kubernetes provider in the Terraform file. I'm surprised I didn't get some kind of warning for not including it, considering I'm interacting with its resources.

Here's what I did to fix it:

outputs.tf:

output "host" {
  value = azurerm_kubernetes_cluster.default.kube_config.0.host
}

output "client_key" {
  value = azurerm_kubernetes_cluster.default.kube_config.0.client_key
}

output "client_certificate" {
  value = azurerm_kubernetes_cluster.default.kube_config.0.client_certificate
}

output "kube_config" {
  value = azurerm_kubernetes_cluster.default.kube_config_raw
}

output "cluster_ca_certificate" {
  value = azurerm_kubernetes_cluster.default.kube_config.0.cluster_ca_certificate
}
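
A side note on these outputs: the azurerm provider marks kube_config as sensitive, so if you later upgrade to Terraform 0.14 or newer, each output derived from it has to be flagged sensitive as well. A minimal sketch for one of them:

output "client_key" {
  value = azurerm_kubernetes_cluster.example.kube_config.0.client_key

  # Required on Terraform 0.14+ because kube_config is a sensitive attribute
  sensitive = true
}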

main.tf:

...
provider "kubernetes" {
  version = "=1.13.2"
  load_config_file = "false"

  host = azurerm_kubernetes_cluster.default.kube_config.0.host
  
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.default.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.default.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.default.kube_config.0.cluster_ca_certificate)}"
}
...
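
With the provider pointed at the cluster's kube_config credentials, the kubernetes_* resources talk to the AKS API endpoint instead of falling back to the default kubeconfig lookup (which is what produced the "no such host" error). One more note: the version argument inside provider blocks is deprecated from Terraform 0.13 onward, so if you're on 0.13 or newer, the equivalent pinning would be a required_providers block. A sketch, assuming the same versions as above:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.20.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "= 1.13.2"
    }
  }
}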