I have a Kubernetes cluster in GCP (GKE) that cannot connect to Memorystore (Redis).

All of the resources related to the project are in a dedicated network.

network module:

resource "google_compute_network" "my_project" {
  name = "my_project"
  auto_create_subnetworks = true
}

output "my_project_network_self_link" {
  value = google_compute_network.my_project.self_link
}

I use the network in the GKE cluster (network = "${var.network_link}"):

resource "google_container_cluster" "my_project" {
  name     = "my_project-cluster"
  location = "us-central1"
  node_locations = ["us-central1-a", "us-central1-b"]

  network = "${var.network_link}"

  # We can't create a cluster with no node pool defined, but we want to only use
  # separately managed node pools. So we create the smallest possible default
  # node pool and immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1

  master_auth {
    username = ""
    password = ""

    client_certificate_config {
      issue_client_certificate = false
    }
  }
}

Node pools omitted.
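
For reference, a separately managed node pool could look roughly like the sketch below; the pool name, machine type and node count are assumptions, not values from the original setup:

resource "google_container_node_pool" "primary" {
  name       = "my_project-node-pool" # assumed name
  location   = "us-central1"
  cluster    = google_container_cluster.my_project.name
  node_count = 1

  node_config {
    machine_type = "e2-medium" # assumed machine type
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform",
    ]
  }
}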

and I set the network as the authorized_network in the Memorystore configuration:

resource "google_redis_instance" "cache" {
  name           = "my_project-redis"
  tier           = "STANDARD_HA"
  memory_size_gb = 1
  authorized_network = "${var.network_link}"

  # location_id             = "us-central1-a"

  redis_version     = "REDIS_4_0"
  display_name      = "my_project Redis cache"
}
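
To reach the instance from the cluster, the Memorystore host and port can be exposed as outputs (the output names here are just an assumption):

output "redis_host" {
  value = google_redis_instance.cache.host
}

output "redis_port" {
  value = google_redis_instance.cache.port
}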

variable "network_link" {
  description = "The link of the network instance is in"
  type        = string
}
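
For context, a sketch of how the root configuration could wire the network output into the other modules; the module names and source paths are assumptions:

module "network" {
  source = "./modules/network"
}

module "gke" {
  source       = "./modules/gke"
  network_link = module.network.my_project_network_self_link
}

module "redis" {
  source       = "./modules/redis"
  network_link = module.network.my_project_network_self_link
}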

I guess that the problem is related to the network, because this was working fine previously when using the default network.

Currently the GKE nodes are in us-central1-a and us-central1-b (specified in the TF script) and the Memorystore instance is in us-central1-c. So the cluster and the Redis instance are in the same VPC but in different zones. Could this be the problem?

1 Answer

I had to add the following section to the cluster module in Terraform:

  ip_allocation_policy {
    cluster_ipv4_cidr_block = ""
    services_ipv4_cidr_block = ""
  }

This seems to enable the VPC-native (alias IP) property of the cluster.
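
For context, here is a sketch of where that block sits inside the cluster resource. Leaving the CIDR blocks empty should let GKE pick the secondary ranges automatically; the explicit ranges below are only placeholders, not values from the original setup:

resource "google_container_cluster" "my_project" {
  # ... name, location, network, etc. as above ...

  # Enables VPC-native (alias IP) networking for the cluster.
  ip_allocation_policy {
    cluster_ipv4_cidr_block  = "10.56.0.0/14"  # placeholder pod range
    services_ipv4_cidr_block = "10.60.0.0/20"  # placeholder services range
  }
}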