2 votes

I have a VPC network with a subnet in the range 10.100.0.0/16, in which the nodes reside. There are a route and firewall rules applied to the range 10.180.102.0/23, which route and allow traffic going to/coming from a VPN tunnel.
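For reference, the route and firewall rules for that range can be listed with something like the following (the filter and format expressions are assumptions; adjust them to your naming):

gcloud compute routes list --filter="destRange=10.180.102.0/23"
gcloud compute firewall-rules list --format="table(name,direction,sourceRanges.list(),destinationRanges.list())"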

If I deploy a node in the 10.100.0.0/16 range, I can ping my devices in the 10.180.102.0/23 range. However, a pod running on that node cannot ping the devices in the 10.180.102.0/23 range. I assume it has to do with the fact that the pods live in a different IP range (10.12.0.0/14).

How can I configure my networking so that I can ping/communicate with the devices living in the 10.180.102.0/23 range?
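For what it's worth, a quick way to reproduce this from inside the cluster is to ping from a throwaway pod (the busybox image and the 10.180.102.10 address are just placeholders for one of my devices):

kubectl run ping-test --rm -it --image=busybox --restart=Never -- ping -c 3 10.180.102.10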

What is your cluster configuration? Did you activate the option --enable-ip-alias? What is your --cluster-ipv4-cidr param? Did you activate a secondary IP range? If so, with which values? – guillaume blaquiere
@guillaumeblaquiere I have configured the cluster and node pool through the GCP console, not through gcloud, but running gcloud container clusters describe <cluster> gives me:

clusterIpv4Cidr: 10.12.0.0/14
ipAllocationPolicy:
  clusterIpv4Cidr: 10.12.0.0/14
  clusterIpv4CidrBlock: 10.12.0.0/14
  clusterSecondaryRangeName: gke-XXX-stack-cluster-pods-f28f6de4
  servicesIpv4Cidr: 10.142.0.0/20
  servicesIpv4CidrBlock: 10.142.0.0/20
  servicesSecondaryRangeName: gke-XXX-stack-cluster-services-f28f6de4
  useIpAliases: true

– Henke
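As an aside, the same fields can be pulled out directly with a format projection along these lines (the cluster name and zone are placeholders):

gcloud container clusters describe <cluster> --zone <zone> --format="yaml(clusterIpv4Cidr, ipAllocationPolicy)"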

2 Answers

8 votes

I don't quite remember exactly how I solved this, but I'm posting what I have in the hope that it helps @tdensmore.

You have to edit the ip-masq-agent (an agent running on GKE that masquerades Pod IPs). This configuration is responsible for letting the pods inside the nodes reach other parts of the GCP VPC network, more specifically the VPN, so it allows pods to communicate with the devices that are reachable through the VPN. The idea is that traffic from pods to destinations outside the listed ranges (such as the VPN range) gets masqueraded behind the node's IP in 10.100.0.0/16, which your existing route and firewall rules already allow.

First of all, we're going to work inside the kube-system namespace, and we're going to create the ConfigMap that configures our ip-masq-agent. Put this in a file named config:

nonMasqueradeCIDRs:
  - 10.12.0.0/14  # The IPv4 CIDR the cluster is using for Pods (required)
  - 10.100.0.0/16 # The IPv4 CIDR of the subnetwork the cluster is using for Nodes (optional; it works without it, but I guess it's better to include it)
masqLinkLocal: false
resyncInterval: 60s

and run kubectl create configmap ip-masq-agent --from-file config --namespace kube-system
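To sanity-check that the ConfigMap landed with the expected contents, you can dump it back out:

kubectl -n kube-system get configmap ip-masq-agent -o yaml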

Afterwards, deploy the ip-masq-agent itself. Put this in an ip-masq-agent.yml file:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ip-masq-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: ip-masq-agent
  template:
    metadata:
      labels:
        k8s-app: ip-masq-agent
    spec:
      hostNetwork: true
      containers:
      - name: ip-masq-agent
        image: gcr.io/google-containers/ip-masq-agent-amd64:v2.4.1
        args:
            - --masq-chain=IP-MASQ
            # To non-masquerade reserved IP ranges by default, uncomment the line below.
            # - --nomasq-all-reserved-ranges
        securityContext:
          privileged: true
        volumeMounts:
          - name: config
            mountPath: /etc/config
      volumes:
        - name: config
          configMap:
            # Note this ConfigMap must be created in the same namespace as the daemon pods - this spec uses kube-system
            name: ip-masq-agent
            optional: true
            items:
              # The daemon looks for its config in a YAML file at /etc/config/ip-masq-agent
              - key: config
                path: ip-masq-agent
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - effect: NoExecute
        operator: Exists
      - key: "CriticalAddonsOnly"
        operator: "Exists"

and run kubectl -n kube-system apply -f ip-masq-agent.yml
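Once that's applied, you can check that the agent is running on every node before retrying the ping from a pod, for example:

kubectl -n kube-system rollout status daemonset/ip-masq-agent
kubectl -n kube-system get pods -l k8s-app=ip-masq-agent -o wide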

Note: it has been a long time since I did this, so details may have changed; there is more info at this link: https://cloud.google.com/kubernetes-engine/docs/how-to/ip-masquerade-agent

1 vote

I'd like to start with some terminology for IP addresses in GKE.

Network namespace: According to the man page, a network namespace is logically another copy of the network stack, with its own routes, firewall rules, and network devices. A Pod's network namespace connects the node's physical network interface with the Pod. This network namespace is also connected to a Linux bridge, allowing communication among Pods on the same node and communication with the outside.

Pod IP: IP address assigned to a Pod, configurable during cluster creation via the Pod Address Range option. GKE assigns this IP to the virtual network interface in the Pod's network namespace and routes it to the node's physical network interface, such as eth0.

Node IP: IP address assigned to the physical network interface of a node, such as eth0. This node IP is configured on the network namespace to communicate with the Pods.

Cluster IP: IP address assigned to a Service, stable for the lifetime of that Service. It uses the network namespace to allow communication between nodes and the external network.
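A quick way to see these addresses side by side on a live cluster is with plain kubectl listings, for example:

kubectl get nodes -o wide      # node IPs (internal and external)
kubectl get pods -o wide       # Pod IPs and the node each Pod runs on
kubectl get services           # cluster IPs of Services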

Here's the source of my information, the GKE Network Overview, where I also found this note:

Warning: Do not manually make changes to nodes because they are overridden by GKE, and your cluster may not function correctly. The only reason to access a node directly is to debug problems with your configuration.


Then, if you are looking to establish communication between your GKE cluster and another network, I would suggest one of these services:

External Load Balancers manage traffic coming from outside the cluster and outside your Google Cloud Virtual Private Cloud (VPC) network. They use forwarding rules associated with the Google Cloud network to route traffic to a Kubernetes node.

Internal Load Balancers manage traffic coming from within the same VPC network. Like external load balancers, they use forwarding rules associated with the Google Cloud network to route traffic to a Kubernetes node.

HTTP(S) Load Balancers are specialized external load balancers used for HTTP(S) traffic. They use an Ingress resource rather than a forwarding rule to route traffic to a Kubernetes node.

You can find more details on the different services in this documentation.

In the big picture, a Pod cannot communicate directly with an external resource; you should use a Service and expose the Pod through it.
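As a sketch of that last point, an internal LoadBalancer Service is one way to expose Pods to other hosts inside the VPC; the Service name, app label, ports, and annotation value below are assumptions, so check the GKE internal load balancing documentation for the annotation your cluster version expects:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service                           # hypothetical name
  annotations:
    networking.gke.io/load-balancer-type: "Internal"  # older GKE versions use cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app                                       # hypothetical Pod label
  ports:
  - port: 80
    targetPort: 8080
EOF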