
I'm on Google Kubernetes Engine, and I need to run the Filebeat DaemonSet described here: https://www.elastic.co/guide/en/beats/filebeat/master/running-on-kubernetes.html. I create the cluster with:

gcloud container clusters create test_cluster \
     --cluster-version "1.9.6-gke.1" \
     --node-version "1.9.6-gke.1" \
     --zone "us-east1-c" \
     --machine-type n1-standard-4 \
     --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.full_control","https://www.googleapis.com/auth/sqlservice.admin","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/pubsub","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
     --num-nodes "1" \
     --network "main-network" \
     --subnetwork "main-subnetwork" \
     --no-enable-cloud-monitoring \
     --no-enable-cloud-logging \
     --no-enable-legacy-authorization \
     --disk-size "50"

With --cluster-version and --node-version set to 1.8.8-gke.0 this works, but when I change them to 1.9.6-gke.1 the filebeat pod can't reach my GCE instance that's running logstash.

Both the cluster and the GCE instance are on the same network, and I'm sure it's not a Google Cloud firewall issue, because if I gcloud compute ssh into the GKE node and run nc -vz -w 5 10.0.0.18 5044, it connects fine.

When the cluster is running 1.8.8-gke.0, the filebeat pod connects to logstash fine and traceroute 10.0.0.18 completes. When I create the cluster with 1.9.6-gke.1, traceroute 10.0.0.18 shows the following:

[root@filebeat-56wtj filebeat]# traceroute 10.0.0.18 
traceroute to 10.0.0.18 (10.0.0.18), 30 hops max, 60 byte packets
 1  gateway (10.52.0.1)  0.063 ms  0.016 ms  0.012 ms
 2  * * *
 3  * * *
 4  * * *
 5  * * *
 6  * * *

edit: Note that this isn't specific to the filebeat container; I tried another container and it also can't reach a GCE instance.
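For reference, the same reachability check can be reproduced from any throwaway pod (the IP and port are the ones from the question; the pod name and busybox image are just convenient choices):

```shell
# Launch a temporary pod and test TCP connectivity to the logstash VM.
# The pod is deleted automatically when the command exits (--rm).
kubectl run netcheck --rm -it --restart=Never --image=busybox -- \
  nc -vz -w 5 10.0.0.18 5044
```

On a 1.8.x cluster this should print that the connection succeeded; on 1.9.x it times out, matching the traceroute output above.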


1 Answer


As you can read here [1]: "Beginning with Kubernetes version 1.9.x, automatic firewall rules have changed such that workloads in your Kubernetes Engine cluster cannot communicate with other Compute Engine VMs that are on the same network, but outside the cluster. This change was made for security reasons.

You can replicate the behavior of older clusters (1.8.x and earlier) by setting a new firewall rule on your cluster."
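A sketch of what that firewall rule could look like with gcloud, assuming the cluster and network names from the question (the rule name allow-gke-pods is arbitrary, and you need to substitute the pod CIDR printed by the first command):

```shell
# Look up the cluster's pod IP range (clusterIpv4Cidr).
gcloud container clusters describe test_cluster --zone us-east1-c \
  --format='get(clusterIpv4Cidr)'

# Allow traffic from that pod range to VMs on the same network,
# restoring the pre-1.9 behavior. Replace <POD_CIDR> with the
# range printed above, e.g. 10.52.0.0/14.
gcloud compute firewall-rules create allow-gke-pods \
  --network main-network \
  --allow tcp,udp,icmp \
  --source-ranges <POD_CIDR>
```

Narrowing --allow to just the ports you need (e.g. tcp:5044 for logstash) is tighter than opening all protocols.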

[1] https://cloud.google.com/kubernetes-engine/release-notes#known-issues