I have a problem with service (DNS) discovery in Kubernetes 1.14 on Ubuntu Bionic.
Right now my two pods communicate using IP addresses. How can I enable CoreDNS for service (DNS) discovery?
Here is the kubectl output for the DNS service and pods in the kube-system namespace:
kubectl get pods,svc --namespace=kube-system | grep dns
pod/coredns-fb8b8dccf-6plz2 1/1 Running 0 6d23h
pod/coredns-fb8b8dccf-thxh6 1/1 Running 0 6d23h
service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 6d23h
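Since both CoreDNS pods are Running and the kube-dns service exists, DNS-based discovery should already be enabled out of the box. As a quick sanity check (a sketch; the pod name dnstest is arbitrary, and busybox:1.28 is just a small image whose nslookup works reliably for this), try resolving the cluster's built-in API service from a throwaway pod:
# Resolve the "kubernetes" service in the default namespace via cluster DNS
kubectl run -it --rm --restart=Never dnstest --image=busybox:1.28 -- nslookup kubernetes.default
If that returns the ClusterIP of the kubernetes service, DNS itself is healthy and the problem is in how the pods address each other.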
I installed Kubernetes on the master node (an Ubuntu Bionic machine) using the steps below:
apt-get update
apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce docker-ce-cli containerd.io
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
kubectl version
apt-mark hold kubelet kubeadm kubectl
kubeadm config images pull
swapoff -a
kubeadm init
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
sysctl net.bridge.bridge-nf-call-iptables=1
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl get pods --all-namespaces
On the worker node, Docker was already installed, so I installed the Kubernetes packages directly:
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
kubectl version
apt-mark hold kubelet kubeadm kubectl
swapoff -a
Then I joined the worker node to the master.
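For reference, this is the usual join flow (a sketch; the token and hash shown are placeholders that kubeadm prints, not real values):
# On the master, print the exact join command:
kubeadm token create --print-join-command
# On the worker, run the printed command; it has the form:
# kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>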
Answer:
Everything was in fact set up correctly by default. My misunderstanding was that I could call a server running in one pod from another pod using the container name and port I had specified in the spec; instead, I should use the service name and port (see the lookup example after the specs below).
Below are my deployment spec and service spec.
Deployment spec:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: node-server1-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: node-server1
    spec:
      hostname: node-server1
      containers:
      - name: node-server1
        image: bvenkatr/node-server1:1
        ports:
        - containerPort: 5551
Service spec:
kind: Service
apiVersion: v1
metadata:
  name: node-server1-service
spec:
  selector:
    app: node-server1
  ports:
  - protocol: TCP
    port: 5551
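With these applied, any other pod can reach the server at node-server1-service:5551 (or node-server1-service.default.svc.cluster.local:5551 from another namespace) instead of a pod IP. A quick check that the service name resolves through CoreDNS (a sketch; dnscheck is an arbitrary pod name):
# Resolve the service name from inside the cluster
kubectl run -it --rm --restart=Never dnscheck --image=busybox:1.28 -- nslookup node-server1-service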