0 votes

I have an app deployed via a Deployment with replicas set to 1. The scheduler keeps moving the app to different nodes:

I0220 08:28:44.884808 1 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"production", Name:"app1-production-77c79bdc85-ddjfb", UID:"109fa057-1618-11e8-bfb0-005056946b20", APIVersion:"v1", ResourceVersion:"6017223", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned app1-production-77c79bdc85-ddjfb to node2

type is Normal and reason is Scheduled. What does "Scheduled" mean? Is there any way to find out exactly why it rescheduled the pod?

Also, if I wanted this pod to stay on a node for a long period of time, a StatefulSet is my friend, correct?

Hi @Matt, it's better to get the scheduler's logs, as there can be quite a lot of reasons for rescheduling. As I understand it, stateless and stateful workloads are different cases: an application that does not store any data or state is called stateless, so it can easily be brought up anywhere. - Suresh Vishnoi
Hi @SureshVishnoi, that's all there is in the logs regarding the moving/rescheduling of the app1-production app (pod). These logs are taken from the kube-scheduler pod. - matt

2 Answers

0 votes

My guess would be that your kubelet is evicting the pod for some reason, and the HA design of the Deployment then kicks in via the scheduler to recover from it. Try to find the reason why the kubelet is evicting your Pod. A StatefulSet will not help you with this at all, as it is specifically designed to retain things like network identity, name, etc. without any need to schedule on the same physical node (which can disappear at any time in a typical cloud setup).
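
For example, something along these lines should surface an eviction reason (the pod, namespace and node names below are taken from the question, so adjust them to your setup):

kubectl get events -n production --sort-by=.metadata.creationTimestamp
kubectl describe pod app1-production-77c79bdc85-ddjfb -n production
kubectl describe node node2

If the kubelet is under resource pressure, the node's Conditions section will show something like MemoryPressure or DiskPressure set to True.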

0 votes

Alright, so I looked at all the logs from the scheduler:

kubectl logs kube-scheduler-master2 -n kube-system

and then found the event for the previous pod rescheduling. I was able to describe that pod, and in that output was the reason:

Status:         Failed
Reason:         Evicted
Message:        The node was low on resource: nodefs.

Low disk space!
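
In case it helps anyone else, the describe step was roughly this (pod name taken from the scheduler event above, so substitute your own):

kubectl describe pod app1-production-77c79bdc85-ddjfb -n production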

I don't know how long K8s keeps that record for (it's no longer available to me, but it was around long enough to help, at least :)
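
If you want to keep a copy before the events are garbage-collected, you can dump them to a file, e.g. (filename here is just an example):

kubectl get events -n production -o yaml > app1-events.yaml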