
I'm running a Kubernetes cluster and HA Redis VMs in the same VPC on Google Cloud Platform. ICMP and traffic on all TCP and UDP ports are allowed on the subnet 10.128.0.0/20. Kubernetes has its own internal network, 10.12.0.0/14, but the cluster's nodes are VMs inside 10.128.0.0/20, the same subnet as the Redis VMs.

However, even though the VMs inside 10.128.0.0/20 can see each other, I can't ping those VMs or connect to their ports from a Kubernetes pod. What would I need to modify, either in Kubernetes or in the GCP firewall rules, to allow this? I was under the impression that this would work out of the box and that pods would be able to access the same network their nodes run on.

kube-dns is up and running, and this is Kubernetes 1.9.4 on GCP.
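
For reference, this is how I'm testing from inside the cluster; the pod name, VM IP, and Redis port below are placeholders for my actual values:

kubectl exec -it my-test-pod -- ping -c 3 10.128.0.5
kubectl exec -it my-test-pod -- nc -zv 10.128.0.5 6379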


1 Answer


I've tried to reproduce your issue with the same configuration, and it works fine for me. I created a network called "myservernetwork1" with the subnet 10.128.0.0/20, started a cluster in that subnet, and created three firewall rules to allow ICMP, TCP, and UDP traffic inside the network.

$ gcloud compute firewall-rules list --filter="myservernetwork1"
NAME                   NETWORK           DIRECTION  PRIORITY  ALLOW
myservernetwork1-icmp  myservernetwork1  INGRESS    1000      icmp
myservernetwork1-tcp   myservernetwork1  INGRESS    1000      tcp
myservernetwork1-udp   myservernetwork1  INGRESS    1000      udp

These rules allow all TCP, UDP, and ICMP traffic inside the network. For example, I created the ICMP rule for my subnet with this command:

gcloud compute firewall-rules create myservernetwork1-icmp \
  --allow icmp \
  --network myservernetwork1 \
  --source-ranges 10.0.0.0/8
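
The TCP and UDP rules from the listing above were created the same way; a sketch, differing only in the protocol:

gcloud compute firewall-rules create myservernetwork1-tcp \
  --allow tcp \
  --network myservernetwork1 \
  --source-ranges 10.0.0.0/8

gcloud compute firewall-rules create myservernetwork1-udp \
  --allow udp \
  --network myservernetwork1 \
  --source-ranges 10.0.0.0/8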

I used the /8 mask because I wanted to cover all addresses in my network: it includes both the node subnet (10.128.0.0/20) and the Kubernetes pod network (10.12.0.0/14), so traffic originating from pod IPs is allowed as well, whereas a source range of only 10.128.0.0/20 would not match it. Check your GCP firewall settings to make sure yours are correct.
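
If you'd rather not open the whole 10.0.0.0/8 range, a rule scoped to just the pod network from your question should also work; a minimal sketch (the rule name is illustrative):

gcloud compute firewall-rules create myservernetwork1-pods \
  --allow icmp,tcp,udp \
  --network myservernetwork1 \
  --source-ranges 10.12.0.0/14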