I am using kube-aws (v0.12.3) to create a Kubernetes cluster on AWS. The worker nodes frequently report "too many open files in the system" when I try to SSH into them, after which the nodes become unresponsive and get restarted.
Because of this, the pods running on those nodes are frequently rescheduled onto different nodes and the application goes down for some time.
How can I resolve this issue?
✗ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T18:02:47Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Worker node:
core@ip-10-0-214-11 ~ $ ulimit -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 251640
max locked memory (kbytes, -l) 16384
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 251640
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
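The open files limit shown above is only 1024. As a reference point, here is a minimal sketch of how the file-descriptor limits could be inspected and raised directly on a worker node; the unit name (docker.service) and the numeric values are assumptions for illustration, not confirmed settings for this cluster:

# Compare current system-wide file handle usage with the kernel limit
cat /proc/sys/fs/file-nr
cat /proc/sys/fs/file-max

# Raise the kernel-wide limit (example value, tune for your workload)
echo 'fs.file-max = 2097152' | sudo tee /etc/sysctl.d/90-file-max.conf
sudo sysctl --system

# Raise the per-process limit for the container runtime via a systemd drop-in
# (assumes the runtime unit is docker.service; adjust to your setup)
sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/30-nofile.conf
[Service]
LimitNOFILE=1048576
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

Since kube-aws provisions nodes from templates, changes applied by hand on a single node will not survive node replacement; ideally the equivalent settings would go into the cluster's node user-data/cloud-config so that newly launched workers pick them up automatically.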