Description:
Ordinarily (i.e. in a non-Kubernetes scenario where my Compute Engine instances host my application directly), a load balancer would distribute the load across multiple replicated Compute Engine instances. In my case, however, I am using a single Compute Engine instance as the worker node, and it has some pods deployed on it.
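To make the setup concrete, here is a rough sketch of what my current deployment looks like (the name `my-app` and the image are placeholders, not my actual workload):

```yaml
# Minimal sketch of the current setup: one Deployment whose pods all end up
# on the single worker node, since there is only one node in the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # placeholder name
spec:
  replicas: 3               # several pods, but all sharing the same machine
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: gcr.io/my-project/my-app:latest  # placeholder image
          ports:
            - containerPort: 8080
```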
Question 1:
What would happen if my worker node (a Google Compute Engine instance) starts receiving a lot of traffic?
Question 2: What would be the best (or at least a better) way to scale my current solution so that it can handle more load and have that load distributed efficiently?
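For reference, this is the kind of direction I have been considering, though I am not sure it is the right approach: putting a Service in front of the pods and adding a HorizontalPodAutoscaler (all names are placeholders matching the sketch above):

```yaml
# One idea I have been considering (unsure if it is correct): expose the pods
# behind a Service and let an HPA grow the replica count with CPU load.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer        # on GCP this provisions a cloud load balancer
  selector:
    app: my-app             # routes traffic across all matching pods
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # the placeholder Deployment from above
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale up when average CPU exceeds 70%
```

My concern with this is that, with only one worker node, extra pod replicas would still share the same machine, which is partly why I am asking Question 2.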