
As a follow-up to this question, I want to know how I can reach my external service (elasticsearch) from inside a Kubernetes pod (fluentd) when the external service is not reachable from the internet but only from the host network in which my Kubernetes cluster is also running.

Here is the external service kubernetes object I applied:

kind: Service
apiVersion: v1
metadata:
  name: ext-elastic
  namespace: kube-system
spec:
  type: ExternalName
  externalName: 192.168.57.105
  ports:
  - port: 9200
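
As a side note, type: ExternalName is implemented as a DNS CNAME record and therefore expects a hostname rather than a plain IP. When only an IP is available, a common alternative is a Service without a selector plus a manually maintained Endpoints object; a rough sketch reusing the name, namespace and address from above (otherwise untested here):

kind: Service
apiVersion: v1
metadata:
  name: ext-elastic
  namespace: kube-system
spec:
  ports:
  - port: 9200
---
kind: Endpoints
apiVersion: v1
metadata:
  name: ext-elastic
  namespace: kube-system
subsets:
- addresses:
  - ip: 192.168.57.105
  ports:
  - port: 9200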

So now I have this service:

ubuntu@controller:~$ kubectl get svc -n kube-system
NAME          TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)         AGE
ext-elastic   ExternalName   <none>       192.168.57.105   9200/TCP        2s
kube-dns      ClusterIP      10.96.0.10   <none>           53/UDP,53/TCP   1d

Elasticsearch itself is reachable from the controller VM:

ubuntu@controller:~$ curl 192.168.57.105:9200
{
  "name" : "6_6nPVn",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "ZmxYHz5KRV26QV85jUhkiA",
  "version" : {
    "number" : "6.2.3",
    "build_hash" : "c59ff00",
    "build_date" : "2018-03-13T10:06:29.741383Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

But from my fluentd-pod I can neither resolve the service name with nslookup nor ping the plain IP. Neither of these commands works:

ubuntu@controller:~$ kubectl exec fluentd-f5dks -n kube-system ping 192.168.57.105
ubuntu@controller:~$ kubectl exec fluentd-f5dks -n kube-system nslookup ext-elastic
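
Since ICMP is often filtered even when TCP works, a TCP-level check is more telling than ping. A sketch of such a check (assuming the fluentd image ships nslookup and wget, and with the command separated from kubectl by --):

ubuntu@controller:~$ kubectl exec fluentd-f5dks -n kube-system -- nslookup ext-elastic.kube-system.svc.cluster.local
ubuntu@controller:~$ kubectl exec fluentd-f5dks -n kube-system -- wget -qO- http://192.168.57.105:9200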

Here is a description of my network topology:

The VM hosting my elasticsearch has the IP 192.168.57.105, and the VM hosting my Kubernetes controller has 192.168.57.102. As shown above, the connection between them works fine.

The controller node also has the IP 192.168.56.102. This is the network it shares with the other worker nodes (also VMs) of my Kubernetes cluster.

My fluentd-pod sees itself as 172.17.0.2. It can easily reach 192.168.56.102 but not 192.168.57.102, although both addresses belong to its own host, i.e. one and the same node.

Edit

The routing table of the fluentd-pod looks like this:

ubuntu@controller:~$ kubectl exec -ti fluentd-5lqns -n kube-system -- route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.244.0.1      0.0.0.0         UG    0      0        0 eth0
10.244.0.0      *               255.255.255.0   U     0      0        0 eth0

The /etc/resolv.conf of the fluentd-pod looks like this:

ubuntu@controller:~$ kubectl exec -ti fluentd-5lqns -n kube-system -- cat /etc/resolv.conf
nameserver 10.96.0.10
search kube-system.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

The routing table of the VM that is hosting the kubernetes controller and can reach the desired elasticsearch service looks like this:

ubuntu@controller:~$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.0.2.2        0.0.0.0         UG    0      0        0 enp0s3
10.0.2.0        *               255.255.255.0   U     0      0        0 enp0s3
10.244.0.0      *               255.255.255.0   U     0      0        0 kube-bridge
10.244.1.0      192.168.56.103  255.255.255.0   UG    0      0        0 enp0s8
10.244.2.0      192.168.56.104  255.255.255.0   UG    0      0        0 enp0s8
172.17.0.0      *               255.255.0.0     U     0      0        0 docker0
192.168.56.0    *               255.255.255.0   U     0      0        0 enp0s8
192.168.57.0    *               255.255.255.0   U     0      0        0 enp0s9
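
A quick way to double-check which route the controller actually uses for the elasticsearch VM (addresses taken from the tables above, the output is only indicative):

ubuntu@controller:~$ ip route get 192.168.57.105
192.168.57.105 dev enp0s9 src 192.168.57.102
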
Could you exec a shell into your pod and print the local routes? Also check the content of the /etc/resolv.conf file. Basically, your pod should have a route to the endpoint IP or a default route to a router that can redirect this traffic to the destination. The destination endpoint should also have a route (or a default route) back to the source of the traffic to be able to send a reply. – VAS

Thank you!! I added the information above! I think I will go with your second solution and route it via the host. As soon as I find the correct rules, I will post them here. – Verena I.

I gave up on this problem. My workaround is to install a standalone td-agent on my controller instance that forwards the Kubernetes logs to the elasticsearch instance. – Verena I.

1 Answer


Basically, your pod should have a route to the endpoint IP or a default route to a router that can redirect this traffic to the destination.

The destination endpoint should also have a route (or a default route) back to the source of the traffic to be able to send a reply.
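
In the topology from the question, that could look roughly like one of the following (a sketch built from the addresses given above, not a verified fix): either give the elasticsearch VM a route back to the pod network, or masquerade pod traffic leaving the controller so that replies go to a node IP the VM already knows.

# Option 1: on the elasticsearch VM (192.168.57.105), route the pod network back via the controller
sudo ip route add 10.244.0.0/16 via 192.168.57.102

# Option 2: on the controller, forward pod traffic towards the elasticsearch network and NAT it
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A POSTROUTING -s 10.244.0.0/16 -d 192.168.57.0/24 -o enp0s9 -j MASQUERADE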

Check out this article for details about routing in AWS cloud as an example.

In a general sense, a route table tells network packets which way they need to go to get to their destination. Route tables are managed by routers, which act as “intersections” within the network — they connect multiple routes together and contain helpful information for getting traffic to its final destination. Each AWS VPC has a VPC router. The primary function of this VPC router is to take all of the route tables defined within that VPC, and then direct the traffic flow within that VPC, as well as to subnets outside of the VPC, based on the rules defined within those tables.

Route tables consist of a list of destination subnets, as well as where the “next hop” is to get to the final destination.
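
As a minimal illustration (the CIDRs and the gateway ID are hypothetical examples, not taken from the question), a route table entry pairs a destination subnet with a target:

Destination      Target
10.0.0.0/16      local           <- traffic within the VPC is routed locally
0.0.0.0/0        igw-0a1b2c3d    <- everything else goes to the internet gateway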