0
votes

I have a working GKE cluster serving content at port 80. How do I get the load balancer service to deliver the content on the external (regional reserved) static IP 111.222.333.123?

I see that kubectl get service shows that the external static IP is successfully registered. The external IP does respond to ping requests.

NAME            TYPE           CLUSTER-IP    EXTERNAL-IP       PORT(S)        AGE
kubernetes      ClusterIP      10.16.0.1     <none>            443/TCP        17h
myapp-cluster   NodePort       10.16.5.168   <none>            80:30849/TCP   84m
myapp-service   LoadBalancer   10.16.9.255   111.222.333.123   80:30879/TCP   6m20s

Additionally, the Google Cloud Platform console shows that the forwarding rule is established and correctly referencing the GKE target pool.
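
For reference, the equivalent checks from the command line (just a sketch; flag values such as the region are omitted):

gcloud compute addresses list           # the reserved regional static IP should be listed as IN_USE
gcloud compute forwarding-rules list    # a rule for port 80 should reference the GKE target pool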

The Deployment and Service manifests I am using are shown below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      environment: sandbox
  template:
    metadata:
      labels:
        app: myapp
        environment: sandbox
    spec:
      containers:
        - name: myapp
          image: myapp
          imagePullPolicy: Always 
          ports:
            - containerPort: 8080
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
    environment: sandbox
  ports:
    - port: 80
      targetPort: 8080
  type: LoadBalancer
  loadBalancerIP: "111.222.333.123" 

The associated Skaffold configuration file is included for reference:

apiVersion: skaffold/v2beta18
kind: Config
metadata:
  name: myapp
build:
  artifacts:
  - image: myapp
    context: .
    docker: {}
deploy:
  kubectl:
    manifests:
    - gcloud_k8_staticip_deployment.yaml

What am I missing to allow traffic to reach the GKE cluster when running this configuration using Google Cloud Code?

Apologies if this has been asked before. Happy to take a pointer if I missed the right solution reviewing questions.

2
I am not an expert on GKE, but I think your service is currently listening on 111.222.333.123:30879, as this is the port that has been allocated for your LoadBalancer service. The usual workflow is to set up an ingress controller on Kubernetes and expose it via a LoadBalancer-type service; this should create a Google load balancer pointing to the ingress controller. By then defining Ingress resources that route traffic to your internal service, your application should be accessible at "loadbalancer-url/path-defined-in-ingress". – meaningqo
I am specifically trying to avoid Ingress and to use the "Use a Service" method described here: cloud.google.com/kubernetes-engine/docs/tutorials/… – RndmSymbl
I am not sure I can help you much then. However, a load balancer should be created by Google for you due to the LoadBalancer type of your service. Can you confirm this and try accessing it? – meaningqo
Yes, the load balancer is up and running; the IP is there and externally routable. Google Cloud Platform is also creating the correct references linking the external IP with the internal GKE cluster. However, still no HTTP traffic is passing through. – RndmSymbl
Hmm, unfortunately I am at a loss here then. Time for some GKE experts to save the day, I guess. Sorry that I couldn't be of more help. – meaningqo

2 Answers

0
votes

I replicated your setup and faced the same issue (I was able to ping the service IP but couldn’t connect to it from the browser).

Then I changed the Deployment container port to 80, the Service target port to 80, and the Service port to 8080, and it worked; I was then able to connect to the deployment from the browser using the service IP (see the sketch below).

Deployment manifest file: (screenshot)

Service manifest file: (screenshot)
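
The screenshots are not reproduced here, so below is a minimal sketch of the change described above; apart from the three port values, everything is carried over from the manifests in the question:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
      environment: sandbox
  template:
    metadata:
      labels:
        app: myapp
        environment: sandbox
    spec:
      containers:
        - name: myapp
          image: myapp
          imagePullPolicy: Always
          ports:
            - containerPort: 80   # was 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
    environment: sandbox
  ports:
    - port: 8080        # was 80
      targetPort: 80    # was 8080
  type: LoadBalancer
  loadBalancerIP: "111.222.333.123"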

0
votes

As far as I know, the configuration quoted in this question should actually work, as long as the image points to an accessible location. I have confirmed this configuration to be working using a toy setup entirely without an IDE, just using Cloud Shell, and everything worked well.

The problem originates from Google Cloud Code changing the kubectl context without any additional warning when a context switch is configured in the run configuration.
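
If anyone else runs into this, here is a quick sketch of how to see and restore the context kubectl is actually using (the context name below is just an example, not taken from my setup):

kubectl config current-context     # show the context currently in use
kubectl config get-contexts        # list all configured contexts
kubectl config use-context gke_my-project_us-central1_my-cluster   # switch back to the intended GKE cluster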