I have created a Kubernetes cluster on Google Cloud using the GKE service.
The GCP environment has a VPC that is connected to the on-premises network over a VPN. The GKE cluster is created in a subnet, say subnet1, in this VPC. The VMs in subnet1 are able to communicate with an on-premises endpoint on its internal (private) IP address. The subnet's entire primary IP address range (10.189.10.128/26) is whitelisted in the on-premises firewall.
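For reference, the subnet is laid out roughly like this (the region and the secondary range name are placeholders; the secondary range itself is described below):

```shell
# Inspect subnet1 (region is a placeholder)
gcloud compute networks subnets describe subnet1 --region=<region>
#   ipCidrRange: 10.189.10.128/26     # primary range: node/VM IPs, whitelisted on-prem
#   secondaryIpRanges:
#   - rangeName: <pods-range-name>    # placeholder; Pod range, see below
#     ipCidrRange: 10.189.32.0/21
```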
The GKE Pods get their IP addresses from the secondary IP address range assigned to the cluster (10.189.32.0/21). I exec'd into one of the Pods and tried to hit the on-premises network, but was not able to get a response. When I checked the network logs, I found that the source IP used to communicate with the on-premises endpoint (10.204.180.164) was the Pod's IP (10.189.37.18), whereas I want the Pod to use the Node's IP address to communicate with the on-premises endpoint.
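This is roughly the test I ran (the Pod name and namespace are placeholders, and I am assuming an HTTP endpoint just for illustration):

```shell
# exec into one of the Pods (name and namespace are placeholders)
kubectl exec -it <pod-name> -n <namespace> -- sh

# from inside the Pod, try to reach the on-premises endpoint; no response comes back
curl -v --connect-timeout 5 http://10.204.180.164
```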
The Pods are managed by a Deployment, and the Deployment is exposed as a ClusterIP Service. This Service is attached to a GKE Ingress.
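The Service and Ingress look roughly like this (names, labels, and ports are placeholders, not the actual manifests):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc                              # placeholder name
  annotations:
    cloud.google.com/neg: '{"ingress": true}'   # container-native load balancing, so the ClusterIP Service can back the GKE Ingress
spec:
  type: ClusterIP
  selector:
    app: my-app                                 # assumed to match the Deployment's Pod labels
  ports:
    - port: 80
      targetPort: 8080                          # placeholder container port
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress                          # placeholder name
spec:
  defaultBackend:
    service:
      name: my-app-svc
      port:
        number: 80
```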