28 votes

I am trying to deploy my microservices into a Kubernetes cluster. The cluster has one master and one worker node; I created it for my R&D on Kubernetes deployment. When I try to deploy, I get an event error message like the following:

Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/2 nodes are available: 2 node(s) had taints that the pod didn't tolerate
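
For reference, the taints on each node can be listed with:

kubectl describe nodes | grep Taints

On a kubeadm control-plane node this typically shows node-role.kubernetes.io/master:NoSchedule.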

My attempt

While searching for the error, I found some forum comments suggesting restarting Docker on the node, etc. So I restarted Docker, but the error stayed the same.

When I run kubectl get nodes, it shows that both nodes are masters and both are in the Ready state:

NAME           STATUS   ROLES    AGE     VERSION
mildevkub020   Ready    master   6d19h   v1.17.0
mildevkub040   Ready    master   6d19h   v1.17.0

I do not see a worker node here. I created one master (mildevkub020) and one worker node (mildevkub040) with one load balancer, and I followed the official Kubernetes documentation at the following link:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/

My question

Is this error caused by a problem with the cluster itself? I cannot find the worker node in the cluster, only master nodes.

The answer has been given to you: stackoverflow.com/a/59491824/6064025 – Aamir M Meman

5 Answers

33 votes

You can run the commands below to remove the taint from the master nodes; after that you should be able to schedule your pod on them:

kubectl taint nodes mildevkub020 node-role.kubernetes.io/master-
kubectl taint nodes mildevkub040 node-role.kubernetes.io/master-
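
To confirm the taints are gone, check the nodes again; each should now report Taints: <none>:

kubectl describe node mildevkub020 | grep Taints
kubectl describe node mildevkub040 | grep Taints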

Now, regarding why it is showing as a master node: check the command you ran to join the node with kubeadm. There are separate join commands for master and worker nodes, as sketched below.
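
For reference, a control-plane join generated by kubeadm carries the --control-plane flag (plus a certificate key), while a worker join does not; the endpoint, token, hash, and key below are placeholders:

# joins the node as another control-plane (master) node
kubeadm join LOAD_BALANCER_DNS:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>

# joins the node as a worker
kubeadm join LOAD_BALANCER_DNS:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

If a node was joined with the first form, it shows up as a master, which matches the kubectl get nodes output in the question.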

7 votes

You can also get this "taint" type of message when your Docker environment doesn't have enough resources allocated.

For example, in Docker Desktop for Mac, allocate more memory/CPU/swap in Preferences, and it may solve your problem.

This can also happen if the Kubernetes autoscaler doesn't have enough nodes to launch a new pod, which you may see as "Insufficient CPU" when you describe the pod.
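
The scheduler's exact complaint shows up in the pod's events; the pod name below is a placeholder:

kubectl describe pod <pod-name>

The Events section at the end will say something like "Insufficient cpu" or list the taints that were not tolerated.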

1 vote

I faced the same issue because my Kubernetes worker node was down (turned off):

Manjunath-MacBook-Air:manjunath$ kubectl get nodes
NAME                       STATUS     ROLES   AGE   VERSION
aks-agentpool-****-*   NotReady   agent   31d   v1.18.10

After starting the VM (the Kubernetes worker instance), the issue was resolved:

Manjunath-MacBook-Air:manjunath$ kubectl get nodes
NAME                       STATUS   ROLES   AGE   VERSION
aks-agentpool-****-*   Ready    agent   31d   v1.18.1

0 votes

I found this in the docs:

https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/

Keep in mind that setting up your cluster to use cloud controller manager will change your cluster behaviour in a few ways:

kubelets specifying --cloud-provider=external will add a taint node.cloudprovider.kubernetes.io/uninitialized with an effect NoSchedule during initialization. This marks the node as needing a second initialization from an external controller before it can be scheduled work. Note that in the event that cloud controller manager is not available, new nodes in the cluster will be left unschedulable. The taint is important since the scheduler may require cloud specific information about nodes such as their region or type (high cpu, gpu, high memory, spot instance, etc).
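
If you suspect this taint, you can check whether it is still present on your nodes:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'

Removing it by hand is possible (the node name below is a placeholder), but doing so usually just masks a cloud-controller-manager problem:

kubectl taint nodes <node-name> node.cloudprovider.kubernetes.io/uninitialized:NoSchedule-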

0 votes

I get this with microk8s when I reboot the machine that my "cluster" runs on. Enough of microk8s comes back online to convince me that it's "up", but pods get stuck pending with this error.

I just have to run microk8s start and whatever is stuck gets unstuck (until the next reboot).
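
For what it's worth, this sequence gets things going again on my machine; --wait-ready blocks until microk8s reports all of its services as running:

microk8s start
microk8s status --wait-ready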