
I am planning to centralize all my logs from the OS, from third-party applications (nginx, sshd, ...), and from my own applications.

All the logs ultimately land in files under /var/log, so I plan to use Beats to forward their contents to Logstash. I would like the parsing to follow a decision tree:

  • everything that comes in (= the file contents sent by Beats) is in syslog format ➜ extract the timestamp, process, ... (a sketch of this first stage follows the list)
    • if process is myownapp.py ➜ the content of the message is a JSON string ➜ forward it as-is to Elasticsearch, but also merge in the fields extracted above
    • if process is anotherofmyapps ➜ the content of the message follows a pattern ➜ grok it into fields, add the fields extracted in the first step ➜ send to ES
    • if process is anything else ➜ send all the fields (including raw message) to ES
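
In Logstash terms, I imagine the first stage looking roughly like this (only a sketch; the field names are just what I would pick, and the port is the usual Beats default):

```
input {
  beats {
    port => 5044
  }
}

filter {
  # first stage: every incoming line is syslog-formatted
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
  }
  # use the syslog timestamp as the event's @timestamp
  date {
    match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
}
```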

Is such a scenario, where filters are chained depending on previously extracted fields, doable?


1 Answer


Yes. If a filter creates a field earlier in your config, that field is available to the rest of the config, so you can test it in a later conditional expression and chain further filters on it.
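
For example, something along these lines should work (a sketch, untested; `syslog_program` and `syslog_message` are whatever names your first syslog grok stage produced, and the second grok pattern is only a placeholder):

```
filter {
  # ... the syslog grok from the first stage has already run at this point,
  #     creating syslog_program, syslog_message, etc. ...

  # branch on a field created by that earlier grok
  if [syslog_program] == "myownapp.py" {
    # message body is a JSON string: parse it into top-level fields,
    # which end up alongside the syslog fields already extracted
    json {
      source => "syslog_message"
    }
  } else if [syslog_program] == "anotherofmyapps" {
    # message body follows a known pattern: grok it further
    # (this pattern is just a placeholder for your own)
    grok {
      match => { "syslog_message" => "%{WORD:action} took %{NUMBER:duration_ms:float} ms" }
    }
  }
  # any other program falls through untouched and keeps the raw message
}
```

Events from any other process simply keep the fields from the first stage plus the raw message, and a single elasticsearch output at the end of the pipeline handles all three cases.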