As a follow-up to this question, I want to know how I can reach my external service (Elasticsearch) from inside a Kubernetes pod (fluentd) when the external service is not reachable via the internet, but only from the host network in which my Kubernetes cluster is hosted.
Here is the Kubernetes Service object I applied for the external endpoint:
kind: Service
apiVersion: v1
metadata:
  name: ext-elastic
  namespace: kube-system
spec:
  type: ExternalName
  externalName: 192.168.57.105
  ports:
  - port: 9200
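(Side note: per the Kubernetes documentation, an ExternalName service is answered as a DNS CNAME record, so externalName is supposed to be a DNS name, not an IP address; with an IP in it, a lookup of ext-elastic returns a CNAME pointing at the literal string "192.168.57.105", which most resolvers cannot use. For a plain IP, the documented alternative is a selector-less Service plus a manually maintained Endpoints object. A minimal sketch of that variant, reusing the names from above:

ubuntu@controller:~$ kubectl apply -f - <<EOF
kind: Service
apiVersion: v1
metadata:
  name: ext-elastic
  namespace: kube-system
spec:
  ports:
  - port: 9200
---
kind: Endpoints
apiVersion: v1
metadata:
  name: ext-elastic        # must match the Service name
  namespace: kube-system
subsets:
- addresses:
  - ip: 192.168.57.105     # the external Elasticsearch VM
  ports:
  - port: 9200
EOF

This gives ext-elastic a normal ClusterIP, and kube-proxy forwards traffic for it to 192.168.57.105:9200.)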
So now I have this service:
ubuntu@controller:~$ kubectl get svc -n kube-system
NAME          TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)         AGE
ext-elastic   ExternalName   <none>       192.168.57.105   9200/TCP        2s
kube-dns      ClusterIP      10.96.0.10   <none>           53/UDP,53/TCP   1d
Elasticsearch itself is up and reachable from the controller node:
ubuntu@controller:~$ curl 192.168.57.105:9200
{
  "name" : "6_6nPVn",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "ZmxYHz5KRV26QV85jUhkiA",
  "version" : {
    "number" : "6.2.3",
    "build_hash" : "c59ff00",
    "build_date" : "2018-03-13T10:06:29.741383Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
But from my fluentd pod I can neither resolve the service name via nslookup nor ping the plain IP. Neither of these commands works:
ubuntu@controller:~$ kubectl exec fluentd-f5dks -n kube-system -- ping 192.168.57.105
ubuntu@controller:~$ kubectl exec fluentd-f5dks -n kube-system -- nslookup ext-elastic
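Two caveats about these probes (a sketch, assuming curl and nslookup exist in the fluentd image): pinging the service name cannot work for this ExternalName service in any case, because the DNS answer is a CNAME to the literal "192.168.57.105"; and pinging the IP only tests ICMP, which can fail even when the TCP port is fine. Testing the actual TCP path is more telling:

ubuntu@controller:~$ kubectl exec -ti fluentd-f5dks -n kube-system -- curl -sv 192.168.57.105:9200
ubuntu@controller:~$ kubectl exec -ti fluentd-f5dks -n kube-system -- nslookup ext-elastic.kube-system.svc.cluster.local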
Here is a description of my network topology:
The VM hosting Elasticsearch has the IP 192.168.57.105, and the VM hosting my Kubernetes controller has 192.168.57.102. As shown above, the connection between these two works fine. The controller node also has the IP 192.168.56.102; this is the network it shares with the other worker nodes (also VMs) of my Kubernetes cluster. My fluentd pod sees itself as 172.17.0.2. It can easily reach 192.168.56.102, but not 192.168.57.102, even though both addresses belong to its own host, i.e. to one and the same node.
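To see where the packets die, something like the following could help (assuming traceroute is present in the pod image; the flags are just illustrative). Comparing the working and the failing destination shows whether the traffic at least reaches the node before being dropped:

ubuntu@controller:~$ kubectl exec -ti fluentd-f5dks -n kube-system -- traceroute -n 192.168.56.102
ubuntu@controller:~$ kubectl exec -ti fluentd-f5dks -n kube-system -- traceroute -n 192.168.57.102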
Edit
The routing table of the fluentd pod looks like this:
ubuntu@controller:~$ kubectl exec -ti fluentd-5lqns -n kube-system -- route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.244.0.1      0.0.0.0         UG    0      0        0 eth0
10.244.0.0      *               255.255.255.0   U     0      0        0 eth0
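So the pod's only routes are its own subnet and a default via 10.244.0.1, meaning every packet for 192.168.57.0/24 is handed to the node, and the node decides what happens next. Which route the pod picks can be confirmed like this (a sketch, assuming iproute2 is available in the image):

ubuntu@controller:~$ kubectl exec -ti fluentd-5lqns -n kube-system -- ip route get 192.168.57.105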
The /etc/resolv.conf of the fluentd pod looks like this:
ubuntu@controller:~$ kubectl exec -ti fluentd-5lqns -n kube-system -- cat /etc/resolv.conf
nameserver 10.96.0.10
search kube-system.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
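Given the search path and ndots:5 above, the short name ext-elastic should be expanded to ext-elastic.kube-system.svc.cluster.local automatically. Querying kube-dns directly with the fully qualified name separates a DNS problem from a routing problem (a sketch, assuming nslookup is in the image):

ubuntu@controller:~$ kubectl exec -ti fluentd-5lqns -n kube-system -- nslookup ext-elastic.kube-system.svc.cluster.local 10.96.0.10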
The routing table of the VM that hosts the Kubernetes controller (and that can reach the desired Elasticsearch service) looks like this:
ubuntu@controller:~$ route
Kernel IP routing table
Destination     Gateway          Genmask         Flags Metric Ref    Use Iface
default         10.0.2.2         0.0.0.0         UG    0      0        0 enp0s3
10.0.2.0        *                255.255.255.0   U     0      0        0 enp0s3
10.244.0.0      *                255.255.255.0   U     0      0        0 kube-bridge
10.244.1.0      192.168.56.103   255.255.255.0   UG    0      0        0 enp0s8
10.244.2.0      192.168.56.104   255.255.255.0   UG    0      0        0 enp0s8
172.17.0.0      *                255.255.0.0     U     0      0        0 docker0
192.168.56.0    *                255.255.255.0   U     0      0        0 enp0s8
192.168.57.0    *                255.255.255.0   U     0      0        0 enp0s9
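The node does have a route to 192.168.57.0/24 via enp0s9, so one thing I am wondering about is whether pod traffic that leaves via enp0s9 is masqueraded: if it leaves with a 10.244.x.x or 172.17.x.x source address, the Elasticsearch VM has no route for the reply. A sketch of what to inspect on the controller node (the MASQUERADE rule at the end is only a hypothesis to test, not a permanent fix):

ubuntu@controller:~$ sysctl net.ipv4.ip_forward                  # should print 1
ubuntu@controller:~$ sudo iptables -t nat -L POSTROUTING -n -v   # look for MASQUERADE covering the pod CIDR
ubuntu@controller:~$ sudo iptables -t nat -A POSTROUTING -s 10.244.0.0/16 -d 192.168.57.0/24 -j MASQUERADE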