1 vote

I'm trying to access an Elasticsearch cluster on GKE from my project in the GAE flexible environment. Since I don't want an external load balancer, I'm following this guide: https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing Both GKE and GAE are deployed in the same region, but the calls to the Elasticsearch cluster time out every time. If anyone has done this and can share some tips, it would be much appreciated!

My service.yaml file looks like this:

apiVersion: v1
kind: Service
metadata:
  name: internalloadbalancerservice
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app.kubernetes.io/component: elasticsearch-server
    app.kubernetes.io/name: elasticsearch  # label selector for the Service
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:   # restrict access
  - xxxxxxxx
  ports:
  - name: myport
    port: 9000
    protocol: TCP # default; can also specify UDP
  selector:
    app.kubernetes.io/name: elasticsearch # label selector for Pods
    app.kubernetes.io/component: elasticsearch-server
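To narrow down whether the timeout comes from the load balancer itself or from the GAE side, one option is to test from a plain Compute Engine VM in the same VPC and region. A rough sketch (the VM name, zone and IP below are placeholders, not from my setup):

# Internal LB services show their internal IP in the EXTERNAL-IP column
kubectl get service internalloadbalancerservice

# From a VM in the same VPC and region, try the port directly
gcloud compute ssh test-vm --zone=europe-west1-b \
  --command="curl -v --max-time 5 http://INTERNAL_LB_IP:9000/"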

3 Answers

4 votes

GCP now has a beta Global Access feature for internal load balancers, which allows them to be reached from any region within the same VPC network.

This will be helpful in cases like yours too, where two services are exposed using internal IP addresses but are located in different regions.
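
While the feature was still beta, global access could also be turned on by hand on the forwarding rule that GKE creates for the Service. A rough sketch (the rule name and region are placeholders you would look up first, and keep in mind that GKE may later recreate the rule and undo manual changes):

# List internal forwarding rules to find the one GKE created for the Service
gcloud compute forwarding-rules list --filter="loadBalancingScheme=INTERNAL"

# Enable global access on it (beta command at the time)
gcloud beta compute forwarding-rules update FORWARDING_RULE_NAME \
  --region=europe-west1 \
  --allow-global-access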

UPDATE

The Global Access feature is now stable (for GKE 1.16.x and above) and can be enabled by adding the annotation below to your Service.

networking.gke.io/internal-load-balancer-allow-global-access: "true"

For example, the manifest below will create your internalloadbalancerservice LoadBalancer with an internal IP address, and that IP will be accessible from any region within the same VPC.

apiVersion: v1
kind: Service
metadata:
  name: internalloadbalancerservice
  annotations:
    cloud.google.com/load-balancer-type: "Internal"

    # Required to enable global access
    networking.gke.io/internal-load-balancer-allow-global-access: "true"

  labels:
    app.kubernetes.io/component: elasticsearch-server
    app.kubernetes.io/name: elasticsearch  # label selector for the Service
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:   # restrict access
  - xxxxxxxx
  ports:
  - name: myport
    port: 9000
    protocol: TCP # default; can also specify UDP
  selector:
    app.kubernetes.io/name: elasticsearch # label selector for Pods
    app.kubernetes.io/component: elasticsearch-server
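
After applying the manifest, the internal IP shows up under EXTERNAL-IP and should be reachable from any region in the same VPC. For example (the file name is just an assumption):

kubectl apply -f service.yaml

# For an internal LB, the EXTERNAL-IP column holds the internal (RFC 1918) address
kubectl get service internalloadbalancerservice

# Optionally confirm the flag landed on the underlying forwarding rule
gcloud compute forwarding-rules list \
  --filter="loadBalancingScheme=INTERNAL" \
  --format="table(name,IPAddress,allowGlobalAccess)"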

This works well for GKE 1.16.x and above. For older GKE versions, you can refer to this answer.

2 votes

To save anyone else from a similar situation, I will share my findings on why I couldn't connect to my GKE app from GAE. The GAE app was in region europe-west, while the GKE cluster was in zone europe-west4-a. I thought those would be in the same region, but moving the GKE cluster to europe-west1-b fixed it. It's not very obvious, but the documentation shows that the GAE region europe-west and the GKE zone europe-west1-b are both in Belgium (europe-west1).
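
A quick way to double-check this mapping, assuming gcloud is pointed at the right project:

# App Engine location of the project (e.g. "europe-west", which is Belgium / europe-west1)
gcloud app describe --format="value(locationId)"

# Zone or region of each GKE cluster
gcloud container clusters list --format="table(name,location)"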

0 votes

Assuming that the GAE app and the GKE cluster are in the same region and the same VPC network, I would suggest making sure you have created ingress allow firewall rules that apply to the GKE nodes as targets, with the GAE app VMs as sources.

Remember that ingress to VMs is denied by the implied deny-ingress firewall rule, so unless you create ingress allow rules you won't be able to send packets to any VMs; a sketch of such a rule follows the list below. Also, to use Internal Load Balancing (ILB), both the client and the backend VMs must be in the same:
- Region
- VPC network
- Project
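
A minimal sketch of such a rule (the network, source range and target tag are placeholders you would replace with your GAE flex subnet range and your GKE nodes' network tag):

gcloud compute firewall-rules create allow-gae-to-elasticsearch \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:9000 \
  --source-ranges=10.0.0.0/8 \
  --target-tags=gke-your-cluster-node-tag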