13
votes

Can anybody let me know how we can access a service deployed on one pod from another pod in a Kubernetes cluster?

Example:

There is an nginx service deployed on Node1 (with pod name nginx-12345) and another service deployed on Node2 (with pod name service-23456). Now, if 'service' wants to communicate with 'nginx' for some reason, how can we access 'nginx' from inside the 'service-23456' pod?

5
You are not explaining yourself properly. What is a 'service' for you? Kubernetes has flat networking by default, so all pods and nodes can talk to each other, no matter their namespaces. – suren
I meant any random service. Services are just a mechanism for accessing deployments. The comments in the section below clearly describe the issue. What I want to know is: if a service such as nginx is deployed on one pod (say pod 1) and another service named eureka is deployed on a second pod (say pod 2), how can we access nginx from pod 2? I am able to access services on the master server, but I am not able to access them from the corresponding pods. – Aditya Datta
OK. So, as I said, k8s networking is flat, so you should be able to talk from one pod to another. How did you create the cluster? If you followed any doc, can you paste it here? – suren

5 Answers

8
votes

There are various ways to access a service in Kubernetes: you can expose your service through a NodePort or LoadBalancer and access it from outside the cluster.
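For example, a quick way to do that with kubectl (a sketch, assuming a deployment named nginx listening on port 80; the names are illustrative):

# expose the nginx deployment outside the cluster via a NodePort
kubectl expose deployment nginx --type=NodePort --port=80

# find the node port that was allocated
kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}'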

See the official documentation on how to access services.

The official Kubernetes documentation states that:

Some clusters may allow you to ssh to a node in the cluster. From there you may be able to access cluster services. This is a non-standard method, and will work on some clusters but not others. Browsers and other tools may or may not be installed. Cluster DNS may not work.

So accessing a service directly from another node depends on which type of Kubernetes cluster you're using.

EDIT:

Once the service is deployed in your cluster, you should be able to contact it using its name, and Kube-DNS will answer with the correct ClusterIP for reaching your final pods. ClusterIPs are governed by iptables rules created by kube-proxy on the worker nodes, which NAT your request to the final container's IP.

The Kube-DNS naming convention is service.namespace.svc.cluster-domain.tld and the default cluster domain is cluster.local.

For example, if you want to contact a service called mysql in the db namespace from any namespace, you can simply speak to mysql.db.svc.cluster.local.
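As a quick sanity check (a sketch, assuming kubectl access to the cluster and the mysql.db example above; the busybox image is just a convenient throwaway), you can verify that the name resolves from inside a pod:

# run a temporary pod and resolve the service name through Kube-DNS
kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup mysql.db.svc.cluster.local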

If this does not work, there might be an issue with kube-dns in your cluster. Hope this helps.

EDIT2: There are some known issues with DNS resolution on Ubuntu. The official Kubernetes documentation states that:

Some Linux distributions (e.g. Ubuntu), use a local DNS resolver by default (systemd-resolved). Systemd-resolved moves and replaces /etc/resolv.conf with a stub file that can cause a fatal forwarding loop when resolving names in upstream servers. This can be fixed manually by using kubelet’s --resolv-conf flag to point to the correct resolv.conf (With systemd-resolved, this is /run/systemd/resolve/resolv.conf). kubeadm 1.11 automatically detects systemd-resolved, and adjusts the kubelet flags accordingly.
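A minimal sketch of that manual fix (assumes a kubeadm DEB install where extra kubelet flags live in /etc/default/kubelet; adjust the file for your setup):

# /etc/resolv.conf is the systemd-resolved stub if it points at 127.0.0.53
cat /etc/resolv.conf

# point kubelet at the real upstream resolv.conf, e.g. in /etc/default/kubelet:
#   KUBELET_EXTRA_ARGS=--resolv-conf=/run/systemd/resolve/resolv.conf
# then restart kubelet so it picks up the flag
sudo systemctl restart kubelet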

5
votes

Did you expose your deployment as a service? If so, simply access it by its DNS name, like http://nginx-1234, or, if it's in a different namespace, http://nginx-1234.default.svc (change "default" to the namespace the service lives in) or http://nginx-1234.default.svc.cluster.local

Now if you did NOT expose a service, then you probably should. You don't need to expose it to the outside world; simply don't define a service type and it will only be available inside your cluster.
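For example (a sketch reusing the nginx deployment from the question; names are illustrative), omitting --type gives you a cluster-internal ClusterIP service:

# create a ClusterIP service for the nginx deployment (reachable only inside the cluster)
kubectl expose deployment nginx --port=80

# from any pod it is now reachable as http://nginx (or http://nginx.default.svc.cluster.local)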

If for some reason you don't want to expose a service (can't think of any reason), you can query the API server for the pod IP. You will need to provide a token for authentication, but this is available inside the pod:

get the token:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

call the API server:

curl https://kubernetes.default.svc/api/v1/namespaces/default/pods --silent \
     --header "Authorization: Bearer $TOKEN" --insecure

You can refine your query by adding ?fieldSelector=spec.nodeName%3Dtargetnodename or similar (field selectors simply use the object's field path). The output can be parsed with https://stedolan.github.io/jq/ or any other JSON utility.
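Putting those pieces together, something like this run from inside a pod lists the pods scheduled on a given node and their IPs (a sketch; targetnodename and the jq filter are illustrative):

# query the API server for pods on one node and print name + pod IP
curl --silent --insecure --header "Authorization: Bearer $TOKEN" \
     "https://kubernetes.default.svc/api/v1/namespaces/default/pods?fieldSelector=spec.nodeName%3Dtargetnodename" \
  | jq -r '.items[] | "\(.metadata.name) \(.status.podIP)"'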

0
votes

A similar question was answered here: Kubernetes - How to acces to service from a web server in pod with a rest request

Just replace "ProductWebApp" with "nginx" and "DashboardWebApp" with "service".

0
votes

I faced a similar issue; the following link might solve yours. Generally, all services are visible and accessible within the cluster. Expose your service-23456 service as type ClusterIP on port 8080. Then you can call the endpoint 'http://service-23456:8080' from the nginx service.

Unable to communicate between 2 node.js apps in Istio enabled GKE cluster
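To verify the connectivity end to end, a quick test from the nginx pod can help (a sketch; it assumes curl is available in the nginx image and reuses the pod/service names from the question):

# exec into the nginx pod and call the other service by its DNS name
kubectl exec -it nginx-12345 -- curl -s http://service-23456:8080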

0
votes

I also faced a similar issue accessing a service deployed on one pod from another pod in a Kubernetes cluster.

I tried some of the solutions from this page, but they do not work when security is applied to the pod.

This is the solution when you are trying to reach a secured pod:

http://service-name.namespace.svc.cluster.local:port-number

This usually works for reaching one pod from another, but it fails when security is applied to the pod you are trying to reach.

I was stuck on exactly that, so create a service account in the pod you are trying to reach:

service-account.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ template "kafka-schema-registry.fullname" . }}

And write an auth policy to allow that service account:

auth-policy.yaml

{{- if .Values.auth.enabled -}}
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: {{ template "pod-name.fullname" . }}
spec:
  selector:
    matchLabels:
      app: {{ template "*pod-name*.name" . }}
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/name-space/sa/pod-name"]
    to:
    - operation:
        methods: ["GET", "POST", "PUT"]
{{- end }}

After all of the above changes are applied to the pod you are trying to reach, the other pods just need to provide the service account name in their deployment.yaml.

Example as below:

deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/actuator/prometheus"
        prometheus.io/port: {{ .Values.service.port | quote }}
      labels:
        app: {{ .Values.name }}
    spec:
      serviceAccountName: {{ *pod-name* }}