4 votes

I have ELK stack (Elasticsearch/Logstash/Kibana) installed and running.

I use one server as the ELK server to collect and store logs from all the other servers; logstash-forwarder is used to ship the logs to it. The problem is:

Logstash receives a lot of logs (as I checked with `tailf logstash.stdout`), but after some time, when I run `tailf logstash.stdout` again, nothing arrives (no logs are received). After restarting the Logstash daemon it begins to receive logs again.

1
What do the Logstash logs say? Have you tried sending fewer logs to see whether the volume is actually the cause of your problem? – bradvido
@bradvido, thanks for the support. logstash.log is empty, so I reduced the volume of logs being sent; the Logstash daemon then stayed up longer, but eventually I got this error in logstash.log: "Error: Your application used more memory than the safety cap of 500M. Specify -J-Xmx####m to increase it (#### = cap size in MB). Specify -w for full OutOfMemoryError stack trace" – Ibrahim Albarki
Some more information about the question that may help others answer: when I run `curl 'localhost:9200/_cat/nodes?v'` I get this output:

    host                      ip      heap.percent ram.percent load node.role master name
    elasticsearch_node_master xxxxxxx 57           87          0.60 d         *      Death's Head II
    logstash_node             xxxxxx  99                            c         -      logstash-logs.-50227-226

I think the error shows up because the logstash node uses 99% of its heap. Help please? – Ibrahim Albarki
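Given the OutOfMemoryError in the comments above, one likely fix is raising the Logstash heap above the 500M default. A minimal sketch, assuming a Logstash 1.x/2.x install whose startup script reads `LS_HEAP_SIZE` (the exact file and variable can differ by version and distro):

```shell
# Raise the JVM heap cap for Logstash (the error above reports a 500M cap).
# On Logstash 1.x/2.x the init scripts read LS_HEAP_SIZE; set it in
# /etc/default/logstash (Debian/Ubuntu) or /etc/sysconfig/logstash (RHEL/CentOS),
# then restart the daemon.
export LS_HEAP_SIZE=1g

# Alternatively, pass the flag the error message itself suggests when
# starting Logstash by hand:
#   logstash agent -f /etc/logstash/conf.d/ -J-Xmx1g

echo "LS_HEAP_SIZE=$LS_HEAP_SIZE"
```

On newer Logstash versions the heap is set via `-Xmx` in the `jvm.options` file instead, so check which mechanism your version uses.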

1 Answer

1
votes

When you use Logstash to filter data from files, what you describe happens when Logstash hits EOF (end of file).

If that is what is going on, have a look at this question: How to return to terminal when logstash filter get eof?

When shippers are involved, it may behave the same way as with files.

But if you describe your problem and errors in more detail, you will get more specific suggestions.
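For reference, the EOF behavior in the linked question concerns pipelines along these lines (a hypothetical minimal config, not the asker's actual one): once the `stdin` input reaches end of file, Logstash stops handing events to the outputs, which looks similar to the stall described in the question.

```
# minimal-pipeline.conf -- hypothetical example config
# Reads events from stdin; when stdin hits EOF, no further events
# reach the output until Logstash is restarted with fresh input.
input  { stdin { } }
output { stdout { codec => rubydebug } }
```

You could reproduce it with something like `cat /var/log/syslog | logstash -f minimal-pipeline.conf` and watching what happens after the last line is consumed.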