
I have created a GKE Kubernetes cluster with two workloads deployed on it, and there is a separate node pool for each workload. The node pool for the celery workload is tainted with celery-node-pool=true. The pod spec has the following toleration:

tolerations:
- key: "celery-node-pool"
  operator: "Exists"
  effect: "NoSchedule"    

Despite the node taint and the toleration, some of the pods from the celery workload are scheduled onto the non-tainted node. Why is this happening, and am I doing something wrong? What other taints and tolerations should I add to keep the pods on specific nodes?


2 Answers

1 vote

Using Taints:

Taints allow a node to repel a set of pods. You have not specified an effect in the taint; it should be celery-node-pool=true:NoSchedule. Also, a taint only repels pods from the tainted node; it does not keep the celery pods off the other node pool. For that, the other nodes need to repel these pods as well, so you would have to add a different taint to the other node pool and not add the corresponding toleration to the celery pods.
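For example, the effect can be applied when tainting the node with kubectl (the node name below is a placeholder; on GKE the same taint can also be set on the node pool itself when the pool is created, so it is applied to every node in it):

kubectl taint nodes <celery-node-name> celery-node-pool=true:NoSchedule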

Using Node Selector:

You can constrain a Pod to only be able to run on particular nodes, or to prefer to run on particular nodes.

You can label the node:

kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal node-pool=true

Add node selector in the pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    node-pool: "true"
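If the pod still lands somewhere unexpected, it is worth verifying that the label is actually present on the node and checking where the pod was scheduled (node and pod names as in the examples above):

kubectl get nodes -l node-pool=true
kubectl get pod nginx -o wide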

Using Node Affinity:

nodeSelector provides a very simple way to constrain pods to nodes with particular labels. The affinity/anti-affinity feature greatly expands the types of constraints you can express.

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-pool
            operator: In
            values:
            - "true"
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
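To tie this back to the question: the celery pods need both the toleration (so the tainted pool accepts them) and a selector or affinity rule (so they cannot be scheduled anywhere else). A minimal sketch, assuming the celery node pool is named celery-pool (a placeholder) and using the cloud.google.com/gke-nodepool label that GKE adds to every node automatically:

apiVersion: v1
kind: Pod
metadata:
  name: celery-worker
spec:
  tolerations:
  - key: "celery-node-pool"
    operator: "Exists"
    effect: "NoSchedule"
  nodeSelector:
    cloud.google.com/gke-nodepool: celery-pool   # placeholder pool name
  containers:
  - name: celery-worker
    image: celery                                # placeholder image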
0 votes

What other taints and tolerations should I add to keep the pods on specific nodes?

You should also add a node selector to pin your pods to the tainted node; otherwise a pod is free to land on a non-tainted node if the scheduler so decides. A toleration only allows the pod onto the tainted node, it does not force the pod to go there.

kubectl taint node node01 hostname=node01:NoSchedule

If I taint node01 and want my pods to be placed on it, the toleration alone is not enough; a node selector is needed as well.

nodeSelector provides a very simple way to constrain (attract) pods to nodes with particular labels.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  tolerations:
  - key: "hostname"
    operator: "Equal"
    value: "node01"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    kubernetes.io/hostname: node01