8 votes

I am trying to access Spring Boot microservices deployed on a Kubernetes cluster and test their REST APIs. I configured the NodePort method in my deployment scripts, but when I try to access the service using the Postman tool, I only get the response "Could not get any response".

I configured my service.yaml like the following:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  ports:
    - port: 7100
      targetPort: 7100
      protocol: TCP
      name: http
      nodePort: 31007
  selector:
    app: my-deployment

My deployment.yaml is like the following:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  labels:
    app: my-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-deployment
  template:
    metadata:
      labels:
        app: my-deployment
      annotations: 
        date: "+%H:%M:%S %d/%m/%y"
    spec:
      imagePullSecrets:
        - name: "regcred"
      containers:
        - name: my-deployment-container
          image: spacestudymilletech010/spacestudysecurityauthcontrol:latest
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 8065
              protocol: TCP
      tolerations:
      - key: "dedicated-app"
        operator: "Equal"
        value: "my-dedi-app-a"
        effect: "NoSchedule"

When I run kubectl describe service, the output is like the following:

[screenshot of kubectl describe service output showing no Endpoints]

And I am trying to access my deployed API in the following way:

  http://<my-cluster-Worker-NodeIP-Address>:31007/<my-deployed-ReST-API-end-point>

Updates

When I run the kubectl describe pod command for my deployment, I get the following response:

docker@MILDEVKUB010:~$ kubectl describe pod spacestudycontrolalerts-deployment-8644449c58-x4zd6
Name:           spacestudycontrolalerts-deployment-8644449c58-x4zd6
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=spacestudycontrolalerts-deployment
                pod-template-hash=8644449c58
Annotations:    date: +%H:%M:%S %d/%m/%y
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/spacestudycontrolalerts-deployment-8644449c58
Containers:
  spacestudycontrolalerts-deployment-container:
    Image:        spacestudymilletech010/spacestudycontrolalerts:latest
    Port:         7102/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-6s55b (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  default-token-6s55b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-6s55b
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.

As shown above, the event message from the describe pod command is 0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.

When I run the kubectl get nodes command, I get the following:

NAME           STATUS   ROLES    AGE   VERSION
mildevkub020   Ready    master   5d    v1.17.0
mildevkub040   Ready    master   5d    v1.17.0

Where have I gone wrong with the service access?

Do you have a typo in the cURL command http://<my-cluster-Worker-NodeIP-Address:31007:/<my-deployed-ReST-API-end-point>? Note the : after the port 31007. - prometherion
@prometherion - I added it to the question by mistake. There is no : when calling from Postman. - Jacob
Check your nodes' status, are they all Ready? kubectl get nodes - Mark Watney

3 Answers

12 votes

If there is an event message like 0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate., it means there is a taint on your nodes that the Pod does not tolerate.

Step 1: Verify there is a taint: kubectl describe node | grep -i taint

Step 2: Remove the taint and verify that it has been removed.

Note that a taint is removed by appending a minus sign to the end of its key.

kubectl taint nodes --all node-role.kubernetes.io/master-

kubectl taint nodes --all node.kubernetes.io/not-ready-

kubectl taint nodes --all node.kubernetes.io/unreachable-

Step 3: Then, as per your deployment.yaml file, create the matching taint (see the concrete example after Step 5).

kubectl taint nodes node1 dedicated-app=my-dedi-app-a:NoSchedule

Step 4: Verify the taint was created: kubectl describe node | grep -i taint

Step 5: Deploy your .yaml file: kubectl apply -f deployment.yaml
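
For example (a sketch only, using the node names mildevkub020 and mildevkub040 from your kubectl get nodes output; taint whichever node should actually host the app), Steps 3-5 would look like this:

kubectl taint nodes mildevkub020 dedicated-app=my-dedi-app-a:NoSchedule
kubectl describe node mildevkub020 | grep -i taint   # should now list dedicated-app=my-dedi-app-a:NoSchedule
kubectl apply -f deployment.yaml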

You specify tolerations for a Pod in the PodSpec. Both of the tolerations sketched below "match" the taint created by the kubectl taint line above, and thus a Pod with either toleration would be able to schedule onto node1.
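
For reference, these are adapted from the Kubernetes taints-and-tolerations documentation to the dedicated-app taint used here (a sketch, not your exact manifest):

tolerations:
- key: "dedicated-app"
  operator: "Equal"
  value: "my-dedi-app-a"
  effect: "NoSchedule"

tolerations:
- key: "dedicated-app"
  operator: "Exists"
  effect: "NoSchedule"

The first requires the taint's exact key, value and effect; the second (operator Exists) matches any value for that key. Your deployment.yaml already carries the Equal form, so once the taint on the node and this toleration agree, the Pod should schedule.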

https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/

Also, your kubectl describe pod output shows that the deployment name is spacestudycontrolalerts-deployment, which is confusing when compared with your deployment.yaml, where metadata.name is my-deployment. Make sure you describe a pod that belongs to the deployment in question.
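
For example (assuming the my-deployment manifest you posted is the one you actually deployed), you can select its Pods by label before describing one:

kubectl get pods -l app=my-deployment
kubectl describe pod <pod-name-from-the-previous-command>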

I hope this helps as a future reference on taints and tolerations.

3 votes

The snapshot shows no Endpoints. That means either there are no Pods running behind the Service, or the selector

  selector:
    app: my-deployment

...doesn't match the labels on any running Pod.
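
A quick way to confirm this (assuming the Service is named my-service as in the posted YAML) is to compare the Service's endpoints with the labels on the running Pods:

kubectl get endpoints my-service
kubectl get pods --show-labels

If the ENDPOINTS column is empty while Pods carrying the expected app label are Running, the selector (or the label) is what needs fixing.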

0 votes

Firstly, the Pod failed to schedule because the toleration defined in deployment.yaml does not match the taints applied to the available nodes:

Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate.

Secondly, from the logs in the problem statement, the selector defined in service.yaml does not match the labels on the described Pod, which will be an issue for mapping Endpoints to the Service.

The selector field in service.yaml:

  selector:
    app: my-deployment

The Pod's labels from the describe command:

docker@MILDEVKUB010:~$ kubectl describe pod spacestudycontrolalerts-deployment-8644449c58-x4zd6

Labels:         app=spacestudycontrolalerts-deployment
                pod-template-hash=8644449c58
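
A minimal sketch of the fix, assuming the spacestudycontrolalerts Pods are the ones this Service should expose (otherwise rename the labels instead so they match the posted service.yaml): make the Service selector equal to the Pod template labels.

# service.yaml - the selector must equal the Pod template labels
spec:
  selector:
    app: spacestudycontrolalerts-deployment

# deployment.yaml - the Pod template labels the Service selects on
spec:
  template:
    metadata:
      labels:
        app: spacestudycontrolalerts-deployment

Once the labels and selector agree (and the taint/toleration issue above is resolved so the Pod actually schedules), the Service will get Endpoints and the NodePort URL should respond.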