So far we have been using a public GKE cluster for all our workloads. We have now created a second, private cluster (still GKE) with improved security and availability (the old cluster is single-zone, the new one is regional). Our code lives on Gitlab.com, but we run a self-hosted Gitlab CI runner in the clusters.
The runner works fine on the public cluster and all workloads complete successfully. On the private cluster, however, every kubectl command in the CI fails with: Unable to connect to the server: dial tcp <IP>:443: i/o timeout. The CI configuration has not changed: same base image, still using the gcloud SDK with a CI-specific service account to authenticate to the cluster.
Both clusters have master authorized networks enabled, with only our office IPs on the allowed list. The master is accessible on a public IP. Authentication succeeds, and client certificates and basic auth are disabled on both clusters. Cloud NAT is configured and the nodes have Internet access (they can pull container images, Gitlab CI can connect, etc.).
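In case it helps, this is roughly how I have been comparing the authorized networks against the IP the runner pods actually egress from (the cluster, router, NAT, and region names below are placeholders for our setup):

```shell
# List the CIDR blocks allowed by master authorized networks on the private cluster
gcloud container clusters describe my-private-cluster \
  --region europe-west1 \
  --format="value(masterAuthorizedNetworksConfig.cidrBlocks)"

# Show the external IP(s) Cloud NAT uses for egress from the cluster's subnet
gcloud compute routers nats describe my-nat \
  --router my-router \
  --region europe-west1 \
  --format="value(natIps)"

# From inside the cluster, check which public IP pod traffic actually leaves from
kubectl run egress-test --rm -it --restart=Never \
  --image=curlimages/curl -- curl -s https://api.ipify.org
```

The NAT IP reported by the last command is not one of our office IPs, so I suspect the relationship between Cloud NAT and the authorized networks list, but I am not sure that is the right trail.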
Am I missing some vital configuration? What else should I be looking at?