1 vote

I noticed every node in a cluster has an external IP assigned to it. That seems to be the default behavior of Google Kubernetes Engine.

I thought the nodes in my cluster should only be reachable from the local network (through their virtual IPs), but I could connect directly to a mongo server running on a pod from my home computer simply by connecting to its hosting node (without using a LoadBalancer).

I tried to stop Container Engine from assigning external IPs to newly created nodes by changing the cluster's instance template settings (changing the "External IP" property from "Ephemeral" to "None"). But after I did that, the cluster was not able to start any pods (I got a "Does not have minimum availability" error). The new instances did not even show up in the list of nodes in my cluster.

After switching back to the default instance template with an external IP, everything worked fine again. So it seems that, for some reason, Google Kubernetes Engine requires cluster nodes to be public.

Could you explain why that is, and whether there is a way to prevent GKE from exposing cluster nodes to the Internet? Should I set up a firewall? What rules should I use (given that nodes are created dynamically)?

I think Google not allowing private nodes is something of a security issue... Suppose someone discovers a security hole in a database management system. We would feel much more comfortable working on a fix (applying patches, upgrading versions) if our database nodes were not exposed to the Internet.


4 Answers

2 votes

GKE recently added a new feature allowing you to create private clusters, which are clusters where nodes do not have public IP addresses.
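For example, assuming a reasonably recent gcloud SDK (the cluster name and master CIDR below are placeholders), creating a private cluster looks roughly like this:

    gcloud container clusters create my-private-cluster \
        --enable-private-nodes \
        --enable-ip-alias \
        --master-ipv4-cidr 172.16.0.0/28

With --enable-private-nodes the nodes get internal IPs only, so they are no longer directly reachable from the Internet.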

0 votes

This is how GKE is designed, and there is no way around it that I am aware of. There is no harm in running Kubernetes nodes with public IPs, and if these are the IPs used for communication between the nodes, you cannot avoid it.

As for your security concern: if you run that example DB on Kubernetes, it would not be accessible even with a public node IP, because it would live only on the internal pod-to-pod network, not on the nodes themselves.

0 votes

As described in this article, you can use network tags to identify which GCE VMs or GKE clusters are subject to certain firewall rules and network routes.

For example, if you want to control access to ports 27017, 27018 and 27019 (the default TCP ports used by MongoDB), give the desired instances a tag and then create a firewall rule that allows traffic to those ports only for instances carrying that tag.
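A rough sketch with gcloud (the rule name, tag and source range are placeholders; adjust them to your own network):

    gcloud compute firewall-rules create allow-mongodb \
        --network default \
        --allow tcp:27017-27019 \
        --source-ranges 10.0.0.0/8 \
        --target-tags mongodb-node

The --target-tags flag makes the rule apply only to instances tagged mongodb-node, and --source-ranges limits who is allowed to connect.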

Also, it is possible to create a GKE cluster with GCE tags applied to all nodes in the new node pool, so the tags can be used in firewall rules to allow or deny traffic to those nodes. This is described in this article under the --tags flag.
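For instance (the cluster name and tag are placeholders), something along these lines:

    gcloud container clusters create my-cluster --tags mongodb-node

should apply the mongodb-node tag to every node in the new node pool, so a rule like the one above keeps matching the nodes even as they are recreated.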

0 votes

The Kubernetes master runs outside your network and needs to reach your nodes. That could be the reason the nodes have public IPs.

When you create your cluster, some firewall rules are created automatically. These are required by the cluster, e.g. for ingress from the master and for traffic between the cluster nodes.
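If I remember the naming convention correctly, the auto-created rules are prefixed with gke- followed by the cluster name, so you can inspect them with something like:

    gcloud compute firewall-rules list --filter "name~^gke-"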

The 'default' network in GCP comes with ready-made firewall rules. These allow SSH and RDP traffic from the Internet and allow your machines to be pinged. You can remove them without affecting the cluster, and your nodes will no longer be reachable that way.
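Assuming the standard names of the pre-populated rules in the default network (double-check with gcloud compute firewall-rules list first), removing them would look like:

    gcloud compute firewall-rules delete default-allow-ssh default-allow-rdp default-allow-icmp

Leave default-allow-internal in place, since traffic inside the network is still needed.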