0 votes

I am running a single-node GKE cluster with 4 vCPU and 16 GB of memory. Now I am planning to add one more node pool with 1 vCPU and 3.75 GB of RAM.

Right now, on the single node, I am running workloads like Elasticsearch, Redis, and RabbitMQ as StatefulSets with attached disks.

I have not added any affinity or anti-affinity to the Pod configuration. If I add a new node, it is possible that some of these pods get scheduled onto it, while I am planning to run only stateless pods on the new node pool.

Is there any way I can stop Elasticsearch, Redis, or RabbitMQ from being scheduled on the new node without adding affinity, and without restarting (touching) the pods or services? Elasticsearch, Redis, and RabbitMQ should stay on the old node only.


2 Answers

2 votes

nodeSelector and kubectl patch could be the solution.

Using a new label

You must first label the node running the stateful workloads, for instance with the label statefullnode=true, using the following command:

kubectl label nodes <node-name> statefullnode=true
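
To confirm the label was applied, you can list the nodes matching it (a quick check, not part of the original steps):

kubectl get nodes -l statefullnode=true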

Then patch each deployment running on this node using kubectl patch:

kubectl patch deployments nginx-deployment -p '{"spec": {"template": {"spec": {"nodeSelector": {"statefullnode": "true"}}}}}'
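
For reference, the patch above is equivalent to adding the following snippet to the Deployment's pod template; this is only an illustration of what the patch writes:

spec:
  template:
    spec:
      nodeSelector:
        statefullnode: "true"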

Using the node name

If you don't want to label your node, you can simply use the node's name with the built-in kubernetes.io/hostname label in the nodeSelector. For instance, if your node's name is my-gke-node, run:

kubectl patch deployments nginx-deployment -p '{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "my-gke-node"}}}}}'

Run kubectl get nodes to get the names of your cluster’s nodes.
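
Since the workloads in the question run as StatefulSets rather than Deployments, the same kind of patch can be applied per StatefulSet; a sketch, assuming a StatefulSet named elasticsearch and a node named my-gke-node (both names are placeholders):

kubectl patch statefulsets elasticsearch -p '{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "my-gke-node"}}}}}'

Note that patching the pod template causes the controller to roll the pods, so this approach may conflict with the requirement of not restarting them.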

1 vote

What you need is taints: "they allow a node to repel a set of pods" (more details here: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/).

From the link above (assuming node1 is the new node in your example):

kubectl taint nodes node1 key=value:NoSchedule

This means that no pod will be able to schedule onto node1 unless it has a matching toleration (and your existing pods don't have a matching toleration).
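
On GKE, the taint can also be set when creating the new node pool, so every node in it starts out tainted; a sketch using gcloud, where the pool name, cluster name, and machine type are assumptions based on the question:

gcloud container node-pools create stateless-pool \
    --cluster my-cluster \
    --machine-type n1-standard-1 \
    --node-taints key=value:NoSchedule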

And to each new pod that you want to schedule on the new node, you apply this toleration:

tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"

Thus, the old pods won't be able to schedule onto the new node; only the new pods, and only those with the toleration applied, will be able to schedule there.
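
For completeness, the tolerations block sits under the pod spec; in a standalone Pod manifest it would look roughly like this (pod, container, and image names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: stateless-app
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"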