9
votes

I have a Kubernetes service on GKE as follows:

$ kubectl describe service staging
Name:           staging
Namespace:      default
Labels:         <none>
Selector:       app=jupiter
Type:           NodePort
IP:             10.11.246.27
Port:           <unnamed>   80/TCP
NodePort:       <unnamed>   31683/TCP
Endpoints:      10.8.0.33:1337
Session Affinity:   None
No events.

I can access the service from a VM directly via its endpoint (10.8.0.33:1337) or via the node port (10.240.251.174:31683 in my case). However, if I try to access the service IP 10.11.246.27 on port 80, I get nothing. I've also tried ports 1337 and 31683.

Why can't I access the service via its IP? Do I need a firewall rule or something?


1 Answer

11
votes

Service IPs are virtual IPs managed by kube-proxy. For that IP to be meaningful, the client must also be part of the kube-proxy "overlay" network, i.e. it must have kube-proxy running and pointing at the same apiserver.
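A quick way to see this, assuming you can SSH to one of the cluster's nodes (or any VM running kube-proxy), is to grep for the service IP in the iptables rules that kube-proxy programs; no network interface actually has that address:

# On a machine running kube-proxy, the virtual IP exists only as
# packet-rewriting rules, not as an address bound to an interface.
# 10.11.246.27 is the service IP from the question above.
$ sudo iptables-save | grep 10.11.246.27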

Pod IPs on GCE/GKE are managed by GCE Routes, which act more like an "underlay" shared by all VMs in the network.
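You can see those routes from any machine with the gcloud CLI; there is roughly one route per node covering that node's pod CIDR (route names and targets will be specific to your project):

# Each node gets a GCE route for its pod CIDR, visible to every VM in
# the network, which is why hitting the endpoint 10.8.0.33:1337 from a
# VM works even though the service IP does not.
$ gcloud compute routes list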

There are a couple of ways to access non-public services from outside the cluster. The Kubernetes documentation covers them in more detail, but in short:

  1. Create a bastion GCE route for your cluster's service IP range (sketched below).
  2. Install kube-proxy on any machine from which you want to access the cluster's services, pointed at the cluster's apiserver.
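A minimal sketch of option 1, assuming the cluster's service IP range is 10.11.240.0/20 (use whatever --service-cluster-ip-range your cluster was created with; the route name, node name, zone, and network below are placeholders as well):

# Send traffic for the service IP range to one of the cluster's nodes;
# kube-proxy on that node then forwards it on to the right pod.
# 10.11.240.0/20 is assumed to be the service CIDR containing 10.11.246.27.
$ gcloud compute routes create gke-services-bastion \
    --destination-range=10.11.240.0/20 \
    --next-hop-instance=gke-node-1 \
    --next-hop-instance-zone=us-central1-b \
    --network=default

Once that route exists, other VMs in the same network should be able to reach 10.11.246.27:80 directly; option 2 achieves the same thing without the extra route by making the client VM itself part of the kube-proxy overlay.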