I would like to deploy the ELK stack on-premise for our custom application. Following the official installation guides, I have set up an Elasticsearch cluster and Kibana. Then comes the question: the documentation says that if the built-in modules are not suitable, I can process logs from any custom app by configuring Filebeat to harvest those logs as an input. But what should the Filebeat output be? I've heard that Elasticsearch expects processed, structured logs as input (for example, in JSON format), whereas our application produces plain-text logs (it's a Java app, so the logs can include stack traces and other mixed data), which presumably need to be parsed and structured first... Or don't they?
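For context, here is roughly what I have on the Filebeat side so far. This is only a sketch: the paths and the multiline pattern are placeholders for our setup, and I'm assuming a recent Filebeat version that uses the `filestream` input type:

```yaml
filebeat.inputs:
  - type: filestream
    id: myapp-logs
    paths:
      - /var/log/myapp/*.log            # placeholder path to our application's logs
    parsers:
      - multiline:
          type: pattern
          pattern: '^\d{4}-\d{2}-\d{2}' # placeholder: lines starting with a date begin a new event
          negate: true
          match: after                  # stack-trace lines get appended to the preceding event

# Filebeat allows only one output to be enabled at a time; which one should it be?
# Option A: straight to Elasticsearch
#output.elasticsearch:
#  hosts: ["https://es01:9200"]

# Option B: to Logstash for parsing/structuring first
output.logstash:
  hosts: ["logstash-host:5044"]         # placeholder host
```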
So, here are my questions regarding this situation:
- Do I need to set the Filebeat output as a Logstash input so Logstash can parse and structure the logs, and then set the Logstash output as the Elasticsearch input? Or can I forward logs from Filebeat straight to Elasticsearch? (A rough sketch of the pipeline I have in mind is below the list.)
- Do I really need Filebeat in this situation, or can Logstash be configured to read the log files on its own? (Also sketched below.)
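For the first question, this is the kind of Logstash pipeline I had in mind if the answer is "Filebeat → Logstash → Elasticsearch". The grok pattern, hosts, and index name are only placeholders; the real filter would depend on our log layout:

```conf
# e.g. /etc/logstash/conf.d/myapp.conf (path is just an example)
input {
  beats {
    port => 5044                        # Filebeat's output.logstash would point here
  }
}

filter {
  # Placeholder pattern; the real one depends on our log format
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
  date {
    match => ["timestamp", "ISO8601"]   # use the log's own timestamp as @timestamp
  }
}

output {
  elasticsearch {
    hosts => ["https://es01:9200"]      # placeholder
    index => "myapp-%{+YYYY.MM.dd}"
  }
}
```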
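And for the second question, if Logstash can indeed read the files itself, I suppose the input section would instead look something like this (again just a sketch with placeholder values):

```conf
input {
  file {
    path => "/var/log/myapp/*.log"      # placeholder path
    start_position => "beginning"
    codec => multiline {
      pattern => "^\d{4}-\d{2}-\d{2}"   # placeholder: a new event starts with a timestamp
      negate => true
      what => "previous"                # glue stack-trace lines to the preceding event
    }
  }
}
```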