
If you run the taint command on a Kubernetes master:

kubectl taint nodes --all node-role.kubernetes.io/master-

it allows regular pods to be scheduled on that node, so it acts as both a worker and a master.
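(As a side note, untainting is not the only way to do this: a pod can instead carry a toleration matching the taint, which leaves the taint in place for all other workloads. A minimal sketch, where the pod name and image are placeholders; note that newer Kubernetes versions use the taint key node-role.kubernetes.io/control-plane instead of node-role.kubernetes.io/master:)

```yaml
# Hypothetical pod that tolerates the master taint, so it can be
# scheduled onto a control-plane node without removing the taint.
apiVersion: v1
kind: Pod
metadata:
  name: example-app          # placeholder name
spec:
  tolerations:
  - key: node-role.kubernetes.io/master   # control-plane on newer clusters
    operator: Exists
    effect: NoSchedule
  containers:
  - name: app
    image: nginx:1.25        # placeholder image
```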

I have tried running a 3-server cluster where all nodes have both roles, and at first glance I didn't notice any issues.

Do you think this solution can be used nowadays to run a small cluster for a production service? If not, what are the real downsides? In which situations does this setup fail compared with the standard setup?

Assume that etcd is running on all three servers.
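(For reference, an etcd cluster of n members needs a majority of n/2 + 1, integer division, to stay available, so three members tolerate exactly one failure. A quick shell sketch of that arithmetic:)

```shell
# etcd availability: a cluster of n members needs a majority
# (quorum) of n/2 + 1 (integer division) to keep serving writes.
quorum() { echo $(( $1 / 2 + 1 )); }
tolerated() { echo $(( $1 - $(quorum $1) )); }

for n in 1 3 5; do
  echo "$n members: quorum=$(quorum $n), tolerated failures=$(tolerated $n)"
done
```

This is also why three members is the smallest useful size: a 2-member cluster still has a quorum of 2 and tolerates zero failures.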

Thank you


1 Answer


The standard reason to run separate master nodes and worker nodes is to keep a busy workload from interfering with the cluster proper.

Say you have three nodes as proposed. One winds up running a database; one runs a Web server; the third runs an asynchronous worker pod. Suddenly you get a burst of traffic into your system: the Rails application is using 100% CPU, the Sidekiq worker is cranking away at 100% CPU, and the MySQL database is trying to handle some complicated joins, pegging the CPU and using all of the available disk bandwidth. You run kubectl get pods: which node is actually able to service that request? If your application triggers the Linux out-of-memory killer, can you guarantee that it won't kill etcd or the kubelet, both of which are critical to a working cluster?
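One partial mitigation, whether or not the masters are separate, is to give every workload explicit resource requests and limits, so the scheduler places pods based on declared capacity and the kubelet can throttle or evict them before node-critical processes starve. A sketch for the hypothetical Rails pod from the scenario above (names and numbers are illustrative, not recommendations):

```yaml
# Illustrative requests/limits for the hypothetical Rails pod.
# With limits set, this pod cannot consume the whole node, and a
# memory overrun kills this container rather than random processes.
apiVersion: v1
kind: Pod
metadata:
  name: rails-app              # placeholder name
spec:
  containers:
  - name: rails
    image: example/rails:latest   # placeholder image
    resources:
      requests:
        cpu: "500m"
        memory: "512Mi"
      limits:
        cpu: "1"
        memory: "1Gi"
```

This helps, but it does not fully substitute for dedicated masters: a misconfigured or unbounded pod on a combined node can still contend with etcd and the kubelet for disk and network I/O.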

If this is running in a cloud environment, you can often get away with smaller (cheaper) nodes to be the masters. (Kubernetes on its own doesn't need a huge amount of processing power, but it does need it to be reliably available.)