
I have created two separate GKE clusters on K8s 1.14.10.

VPN access to in-house network not working after GKE cluster upgrade to 1.14.6

I have followed this (the question linked above) and the IP masquerading agent documentation. I have tried to test it using a client pod and a server pod that exchange messages. I'm using the internal node IP to send the messages and have created a ClusterIP service to expose the pods.

I have allowed requests for every instance in the firewall rules, for ingress and egress, i.e. 0.0.0.0/0. (Pic: the description of the cluster I have created.) The config map of the IP masquerading agent stays the same as in the documentation. I'm able to ping the other node from within the pod, but a curl request says connection refused and tcpdump shows no data.

Problem: I need to communicate from cluster A to cluster B on GKE 1.14 with IP masquerading set to true. I either get connection refused or an i/o timeout. I have tried using internal and external node IPs as well as a LoadBalancer.

What's the curl command? Against the ClusterIP, what port? Is that port forwarding to the right target port? – suren
I have exposed 8080 as the designated port, and in the server code I'm listening on port 8080 itself. – Sourav C
Can you clarify what your issue is? Are you connecting over a VPN? Please note that if you are trying to reach a ClusterIP service in cluster B from cluster A, this will not work; a ClusterIP is only accessible from within the cluster where it is hosted. – Patrick W
I tried this using a LoadBalancer and tried connecting to it with the external IP. I still get connection refused. – Sourav C
How did you enable the IP masquerading agent? Did you enable Network Policy, set the Pod range, or both? Did you create the cluster with these settings or update the cluster with them afterwards? Do you want to communicate from cluster A to cluster B in some specific scenario? Are you connecting from cluster A to cluster B using NodeIP:NodePort, a Service? What exactly do you want to achieve, and using what? Are you able to provide some config YAMLs (Services, Deployments, etc.)? – PjoterS

1 Answer


You have provided quite general information, and without details I cannot give an answer for your specific scenario. It might be related to how you created the clusters or to other firewall settings. For that reason I will provide the correct steps to create and configure 2 clusters with firewall rules and masquerading. Maybe you will be able to find which step you missed or misconfigured.

The clusters' configuration (nodes, pods, svc) is at the bottom of the answer.

1. Create VPC and 2 clusters

The docs talk about 2 different projects, but you can do it in one project. A good example of VPC creation and 2 clusters can be found in the GKE docs: Create VPC and Create 2 clusters. In the tier-1-cluster you can enable NetworkPolicy now instead of enabling it later. After that you will need to create firewall rules; you will also need to allow the ICMP protocol in the firewall rule. A sketch of these commands follows.
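As a rough sketch (the network, subnet and cluster names, the zone and the CIDRs below are only example values chosen to match the test setup at the bottom of this answer, not something you have to reuse):

$ gcloud compute networks create test-vpc --subnet-mode=custom
$ gcloud compute networks subnets create tier-1-subnet \
    --network=test-vpc --region=us-central1 --range=10.0.4.0/22 \
    --secondary-range=tier-1-pods=10.4.0.0/14,tier-1-services=10.0.32.0/20
$ gcloud compute networks subnets create tier-2-subnet \
    --network=test-vpc --region=us-central1 --range=172.16.4.0/22 \
    --secondary-range=tier-2-pods=172.20.0.0/14,tier-2-services=172.16.16.0/20

$ gcloud container clusters create tier-1-cluster --zone=us-central1-a \
    --network=test-vpc --subnetwork=tier-1-subnet --enable-ip-alias \
    --cluster-secondary-range-name=tier-1-pods \
    --services-secondary-range-name=tier-1-services \
    --enable-network-policy
$ gcloud container clusters create tier-2-cluster --zone=us-central1-a \
    --network=test-vpc --subnetwork=tier-2-subnet --enable-ip-alias \
    --cluster-secondary-range-name=tier-2-pods \
    --services-secondary-range-name=tier-2-services

And a firewall rule allowing ICMP between the two node pools:

$ gcloud compute firewall-rules create allow-icmp-between-clusters \
    --network=test-vpc --allow=icmp \
    --source-ranges=10.0.4.0/22,172.16.4.0/22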

At this point you should be able to ping between the nodes of the 2 clusters.

For additional firewall rules (allowing connections between pods, services, etc.) please check these docs. An example is sketched below.
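For example, a single broad rule like the following (the name and source ranges are just an illustration covering the node, pod and service CIDRs used in this answer) is enough for the tests below:

$ gcloud compute firewall-rules create allow-cross-cluster-tcp-udp \
    --network=test-vpc --allow=tcp,udp \
    --source-ranges=10.0.0.0/8,172.16.0.0/12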

2. Enable IP masquerade agent

As mentioned in the docs on the IP masquerade agent:

The ip-masq-agent DaemonSet is automatically installed as an add-on with --nomasq-all-reserved-ranges argument in a GKE cluster, if one or more of the following is true:

The cluster has a network policy.

OR

The Pod's CIDR range is not within 10.0.0.0/8.

This means that the tier-2-cluster already has ip-masq-agent in the kube-system namespace (because its Pod CIDR range is not within 10.0.0.0/8). And if you enabled NetworkPolicy during the creation of the tier-1-cluster, it should also have been installed there. If not, you will need to enable it using the command:

$ gcloud container clusters update tier-1-cluster --update-addons=NetworkPolicy=ENABLED --zone=us-central1-a

To verify that everything is OK, you have to check whether the ip-masq-agent DaemonSet pods were created (one pod per node).

$ kubectl get ds ip-masq-agent -n kube-system
NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                 AGE
ip-masq-agent   3         3         3       3            3           beta.kubernetes.io/masq-agent-ds-ready=true   168m

If you SSH to any of your nodes, you will be able to see the default iptables entries.

$ sudo iptables -t nat -L IP-MASQ
Chain IP-MASQ (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             169.254.0.0/16       /* ip-masq: local traffic is not subject to MASQUERADE */
RETURN     all  --  anywhere             10.0.0.0/8           /* ip-masq: RFC 1918 reserved range is not subject to MASQUERADE */
RETURN     all  --  anywhere             172.16.0.0/12        /* ip-masq: RFC 1918 reserved range is not subject to MASQUERADE */
RETURN     all  --  anywhere             192.168.0.0/16       /* ip-masq: RFC 1918 reserved range is not subject to MASQUERADE */
RETURN     all  --  anywhere             240.0.0.0/4          /* ip-masq: RFC 5735 reserved range is not subject to MASQUERADE */
RETURN     all  --  anywhere             192.0.2.0/24         /* ip-masq: RFC 5737 reserved range is not subject to MASQUERADE */
RETURN     all  --  anywhere             198.51.100.0/24      /* ip-masq: RFC 5737 reserved range is not subject to MASQUERADE */
RETURN     all  --  anywhere             203.0.113.0/24       /* ip-masq: RFC 5737 reserved range is not subject to MASQUERADE */
RETURN     all  --  anywhere             100.64.0.0/10        /* ip-masq: RFC 6598 reserved range is not subject to MASQUERADE */
RETURN     all  --  anywhere             198.18.0.0/15        /* ip-masq: RFC 6815 reserved range is not subject to MASQUERADE */
RETURN     all  --  anywhere             192.0.0.0/24         /* ip-masq: RFC 6890 reserved range is not subject to MASQUERADE */
RETURN     all  --  anywhere             192.88.99.0/24       /* ip-masq: RFC 7526 reserved range is not subject to MASQUERADE */
MASQUERADE  all  --  anywhere             anywhere             /* ip-masq: outbound traffic is subject to MASQUERADE (must be last in chain) */

3. Deploy test application

I've used the Hello application from the GKE docs and deployed it on both clusters. In addition, I have also deployed an ubuntu image for tests. A sketch of the manifests is below.
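For reference, something close to what I deployed (this manifest is reconstructed from the hello-world example in the GKE Services docs, so treat the exact fields as an approximation; the ubuntu pod is just a throwaway test client):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      department: world
      greeting: hello
  template:
    metadata:
      labels:
        department: world
        greeting: hello
    spec:
      containers:
      - name: hello
        image: gcr.io/google-samples/hello-app:2.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    department: world
    greeting: hello
  ports:
  - port: 60000
    targetPort: 8080

Apply it with kubectl apply -f on both clusters, and create the test client on the Tier 1 cluster:

$ kubectl run ubuntu --image=ubuntu --restart=Never --command -- sleep infinity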

4. Apply proper configuration for IPMasquerade

This config needs to be applied on the source cluster.

In short, if the destination CIDR is in nonMasqueradeCIDRs:, the traffic keeps the internal (pod) IP as its source; otherwise the node IP is shown as the source.

Save the text below to a file named config:

nonMasqueradeCIDRs:
  - 10.0.0.0/8
resyncInterval: 2s
masqLinkLocal: true

Create the IPMasquerade ConfigMap:

$ kubectl create configmap ip-masq-agent --from-file config --namespace kube-system

It will overwrite the iptables configuration:

$ sudo iptables -t nat -L IP-MASQ
Chain IP-MASQ (2 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             10.0.0.0/8           /* ip-masq-agent: local traffic is not subject to MASQUERADE */
MASQUERADE  all  --  anywhere             anywhere             /* ip-masq-agent: outbound traffic is subject to MASQUERADE (must be last in chain) */

5. Tests:

When IP is Masqueraded

SSH to a node from the Tier 2 cluster and run:

sudo toolbox bash
apt-get update
apt install -y tcpdump

Now listen using the command below. Port 32502 is the NodePort of the service in the Tier 2 cluster.

tcpdump -i eth0 -nn -s0 -v port 32502

In the Tier 1 cluster you need to enter the ubuntu pod and curl NodeIP:NodePort:

$ kubectl exec -ti ubuntu -- bin/bash 

You will need to install curl (apt-get update && apt-get install -y curl).

Then curl NodeIP:NodePort (the node which is listening, and the NodePort of the service from the Tier 2 cluster).

CLI:

root@ubuntu:/# curl 172.16.4.3:32502
Hello, world!
Version: 2.0.0
Hostname: hello-world-deployment-7f67f479f5-h4wdm

On the node you will see an entry like:

tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
12:53:30.321641 IP (tos 0x0, ttl 63, id 25373, offset 0, flags [DF], proto TCP (6), length 60)
    10.0.4.4.56018 > 172.16.4.3.32502: Flags [S], cksum 0x8648 (correct), seq 3001889856

10.0.4.4 is the IP of the node where the ubuntu pod is located.

When IP was not Masqueraded

Remove the ConfigMap from the Tier 1 cluster:

$ kubectl delete cm ip-masq-agent -n kube-system

Change the CIDR in the config file to 172.16.4.0/22, which is the Tier 2 node pool range, and reapply the ConfigMap.
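The config file should then contain (same as before, only with the CIDR swapped):

nonMasqueradeCIDRs:
  - 172.16.4.0/22
resyncInterval: 2s
masqLinkLocal: true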

$ kubectl create configmap ip-masq-agent --from-file config --namespace kube-system

SSH to any node from Tier 1 to check if iptables rules were changed.

sudo iptables -t nat -L IP-MASQ
Chain IP-MASQ (2 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             172.16.4.0/22        /* ip-masq-agent: local traffic is not subject to MASQUERADE */
MASQUERADE  all  --  anywhere             anywhere             /* ip-masq-agent: outbound traffic is subject to MASQUERADE (must be last in chain) */

Now, for the test, I have again used the ubuntu pod and curled the same IP as before.

tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
13:16:50.316234 IP (tos 0x0, ttl 63, id 53160, offset 0, flags [DF], proto TCP (6), length 60)
    10.4.2.8.57876 > 172.16.4.3.32502

10.4.2.8 is the internal IP of the ubuntu pod.

Configuration for Tests:

TIER1

$ kubectl get pods,svc,nodes -o wide
NAME                                          READY   STATUS    RESTARTS   AGE   IP         NODE                                            NOMINATED NODE   READINESS GATES
pod/hello-world-deployment-7f67f479f5-b2qqz   1/1     Running   0          15m   10.4.1.8   gke-tier-1-cluster-default-pool-e006097b-5tnj   <none>           <none>
pod/hello-world-deployment-7f67f479f5-shqrt   1/1     Running   0          15m   10.4.2.5   gke-tier-1-cluster-default-pool-e006097b-lfvh   <none>           <none>
pod/hello-world-deployment-7f67f479f5-x7jvr   1/1     Running   0          15m   10.4.0.8   gke-tier-1-cluster-default-pool-e006097b-1wbf   <none>           <none>
pod/ubuntu                                    1/1     Running   0          91s   10.4.2.8   gke-tier-1-cluster-default-pool-e006097b-lfvh   <none>           <none>

NAME                  TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)           AGE    SELECTOR
service/hello-world   NodePort    10.0.36.46   <none>        60000:31694/TCP   14m    department=world,greeting=hello
service/kubernetes    ClusterIP   10.0.32.1    <none>        443/TCP           115m   <none>

NAME                                                 STATUS   ROLES    AGE    VERSION           INTERNAL-IP   EXTERNAL-IP     OS-IMAGE                             KERNEL-VERSION   CONTAINER-RUNTIME
node/gke-tier-1-cluster-default-pool-e006097b-1wbf   Ready    <none>   115m   v1.14.10-gke.36   10.0.4.2      35.184.38.21    Container-Optimized OS from Google   4.14.138+        docker://18.9.7
node/gke-tier-1-cluster-default-pool-e006097b-5tnj   Ready    <none>   115m   v1.14.10-gke.36   10.0.4.3      35.184.207.20   Container-Optimized OS from Google   4.14.138+        docker://18.9.7
node/gke-tier-1-cluster-default-pool-e006097b-lfvh   Ready    <none>   115m   v1.14.10-gke.36   10.0.4.4      35.226.105.31   Container-Optimized OS from Google   4.14.138+        docker://18.9.7

TIER2

$ kubectl get pods,svc,nodes -o wide
NAME                                          READY   STATUS    RESTARTS   AGE   IP           NODE                                            NOMINATED NODE   READINESS GATES
pod/hello-world-deployment-7f67f479f5-92zvk   1/1     Running   0          12m   172.20.1.5   gke-tier-2-cluster-default-pool-57b1cc66-xqt5   <none>           <none>
pod/hello-world-deployment-7f67f479f5-h4wdm   1/1     Running   0          12m   172.20.1.6   gke-tier-2-cluster-default-pool-57b1cc66-xqt5   <none>           <none>
pod/hello-world-deployment-7f67f479f5-m85jn   1/1     Running   0          12m   172.20.1.7   gke-tier-2-cluster-default-pool-57b1cc66-xqt5   <none>           <none>

NAME                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE    SELECTOR
service/hello-world   NodePort    172.16.24.206   <none>        60000:32502/TCP   12m    department=world,greeting=hello
service/kubernetes    ClusterIP   172.16.16.1     <none>        443/TCP           113m   <none>

NAME                                                 STATUS   ROLES    AGE    VERSION           INTERNAL-IP   EXTERNAL-IP      OS-IMAGE                             KERNEL-VERSION   CONTAINER-RUNTIME
node/gke-tier-2-cluster-default-pool-57b1cc66-84ng   Ready    <none>   112m   v1.14.10-gke.36   172.16.4.2    35.184.118.151   Container-Optimized OS from Google   4.14.138+        docker://18.9.7
node/gke-tier-2-cluster-default-pool-57b1cc66-mlmn   Ready    <none>   112m   v1.14.10-gke.36   172.16.4.3    35.238.231.160   Container-Optimized OS from Google   4.14.138+        docker://18.9.7
node/gke-tier-2-cluster-default-pool-57b1cc66-xqt5   Ready    <none>   112m   v1.14.10-gke.36   172.16.4.4    35.202.94.194    Container-Optimized OS from Google   4.14.138+        docker://18.9.7