1 vote

The following Terraform resource creates an AKS cluster with a Virtual Machine Scale Set (VMSS) and a Load Balancer (LB). Currently, diagnostic logs are enabled on the cluster resource by adding an oms_agent block under addon_profile.

However, the documentation does not mention whether there is a way to enable diagnostics on the VMSS created by default_node_pool or on the LB created by network_profile. Is this possible via Terraform?

Alternatively, is there a fixed naming scheme for the VMSS and LB created by the cluster? If there is, one solution would be to look up resources with those predefined names in the node resource group and create the Log Analytics solution for them.

Terraform Documentation:

https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#default_node_pool
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/kubernetes_cluster#load_balancer_profile

    resource "azurerm_kubernetes_cluster" "aks-cluster" {
      resource_group_name             = azurerm_resource_group.aks-rg.name
      location                        = azurerm_resource_group.aks-rg.location
      name                            = "my-cluster"
      dns_prefix                      = "my-cluster-aks"
      kubernetes_version              = "1.18.8"
      private_cluster_enabled         = false
      node_resource_group             = "MC_my-cluster-aks"
      api_server_authorized_ip_ranges = [var.authorized_ip]
      service_principal {
        client_id     = var.sp_client_id
        client_secret = var.client_secret
      }
      default_node_pool {
        name                = "default"
        type                = "VirtualMachineScaleSets"
        vm_size             = "Standard_D2_v2"
        node_count          = 4
        enable_auto_scaling = true
        min_count           = 4
        max_count           = 6
        vnet_subnet_id      = azurerm_subnet.aks-vnet-subnet.id
      }
      network_profile {
        network_plugin     = "azure"
        network_policy     = "azure"
        docker_bridge_cidr = var.aks_docker_bridge_cidr
        dns_service_ip     = var.aks_dns_service_ip
        load_balancer_sku  = "standard"
        service_cidr       = var.aks_service_cidr
      }
      addon_profile {
        oms_agent {
          enabled                    = true
          log_analytics_workspace_id = azurerm_log_analytics_workspace.aks_log_ws.id
        }
      }
    }
What do you actually expect? - Charles Xu
A way to enable diagnostic logs on the VMSS and LB resources that get created during the creation of the AKS cluster by the code above. - kjd
@kjd any success with this? Having the same task. - hazzik
@hazzik No, there is currently no way to do this via Terraform, since the name of the created VMSS is not known. The best approach is to run a script after Terraform has created the resources to look up the name of the VMSS and enable diagnostics on it. The load balancer, however, is always named kubernetes. This naming information was confirmed by Azure. - kjd

1 Answer

2 votes

The names of the load balancers are fixed to kubernetes and kubernetes-internal, and they sit in the azurerm_kubernetes_cluster.aks-cluster.node_resource_group resource group. However, because load balancers are dynamic and created only when a service of type LoadBalancer exists, I doubt that you would be able to enable monitoring on them via Terraform.
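If the kubernetes load balancer does already exist (i.e. at least one LoadBalancer service has been created), a sketch of wiring it to the workspace could look like this. This is an assumption on my part, not part of the original setup; the setting name aks-lb-diagnostics is illustrative, and which log/metric categories are available depends on the LB SKU:

```hcl
# Hypothetical sketch: look up the managed LB by its fixed name "kubernetes"
# in the cluster's node resource group.
data "azurerm_lb" "aks" {
  name                = "kubernetes"
  resource_group_name = azurerm_kubernetes_cluster.aks-cluster.node_resource_group
}

resource "azurerm_monitor_diagnostic_setting" "aks-lb" {
  name                       = "aks-lb-diagnostics" # illustrative name
  target_resource_id         = data.azurerm_lb.aks.id
  log_analytics_workspace_id = azurerm_log_analytics_workspace.aks_log_ws.id

  # "AllMetrics" is the generic metric category; assumed to be the one you want.
  metric {
    category = "AllMetrics"
    enabled  = true
  }
}
```

Note that the data lookup fails if no LoadBalancer service exists yet, which is exactly the dynamic-creation problem described above.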

For the VMSS there is a scheme for generating the name: https://github.com/Azure/aks-engine/blob/29c25089d4fa635cb90a3a2cd21d14af47deb40a/pkg/api/types.go#L929-L947; however, it would probably be impossible to reimplement in Terraform, so I would consider it a no-go.

Also, an issue was created in the azurerm Terraform provider asking to expose the name of the cluster's VMSS. However, it was closed as won't fix.

So, to solve the same problem, I had to resort to the azurerm_resources data source:

data "azurerm_resources" "aks-cluster-vmss" {
  resource_group_name = "MC_${azurerm_resource_group.aks-rg.name}_my-cluster_${azurerm_resource_group.aks-rg.location}"
  type                = "Microsoft.Compute/virtualMachineScaleSets"
}

resource "azurerm_virtual_machine_scale_set_extension" "monitoring" {
  count = length(data.azurerm_resources.aks-cluster-vmss.resources)

  name                         = "MMAExtension"
  virtual_machine_scale_set_id = data.azurerm_resources.aks-cluster-vmss.resources[count.index].id
  publisher                    = "Microsoft.EnterpriseCloud.Monitoring"
  type                         = "OmsAgentForLinux"
  type_handler_version         = "1.13"
  auto_upgrade_minor_version   = true

  settings = <<SETTINGS
  {
     "workspaceId": "${azurerm_log_analytics_workspace.aks_log_ws.workspace_id}"
  }
SETTINGS

  protected_settings = <<SETTINGS
  {
      "workspaceKey": "${azurerm_log_analytics_workspace.aks_log_ws.primary_shared_key}"
  }
SETTINGS

  depends_on = [ azurerm_kubernetes_cluster.aks-cluster ]
}

If you do not want to compute the resource group name for azurerm_resources, you can move this code into a module and pass azurerm_kubernetes_cluster.aks-cluster.node_resource_group in as the group name; referencing the cluster attribute directly does not work here because count cannot use a value that is only known after apply. Alternatively, if you know the expected number of VMSSs, you can hardcode that number.
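A minimal sketch of that module approach. The module path and variable names are illustrative, not from the original answer; the extension body mirrors the one above, with jsonencode standing in for the heredocs:

```hcl
# modules/aks-vmss-monitoring/main.tf  (hypothetical module layout)
variable "node_resource_group" {
  type        = string
  description = "Node resource group of the AKS cluster (MC_...)"
}

variable "workspace_id" {
  type = string
}

variable "workspace_key" {
  type = string
}

data "azurerm_resources" "vmss" {
  resource_group_name = var.node_resource_group
  type                = "Microsoft.Compute/virtualMachineScaleSets"
}

resource "azurerm_virtual_machine_scale_set_extension" "monitoring" {
  count = length(data.azurerm_resources.vmss.resources)

  name                         = "MMAExtension"
  virtual_machine_scale_set_id = data.azurerm_resources.vmss.resources[count.index].id
  publisher                    = "Microsoft.EnterpriseCloud.Monitoring"
  type                         = "OmsAgentForLinux"
  type_handler_version         = "1.13"
  auto_upgrade_minor_version   = true

  settings           = jsonencode({ workspaceId = var.workspace_id })
  protected_settings = jsonencode({ workspaceKey = var.workspace_key })
}
```

The root module then passes the node resource group straight through:

```hcl
module "aks_vmss_monitoring" {
  source              = "./modules/aks-vmss-monitoring"
  node_resource_group = azurerm_kubernetes_cluster.aks-cluster.node_resource_group
  workspace_id        = azurerm_log_analytics_workspace.aks_log_ws.workspace_id
  workspace_key       = azurerm_log_analytics_workspace.aks_log_ws.primary_shared_key
}
```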