
We are setting up Elasticsearch, Kibana, Logstash and Filebeat on a server to analyse log files from many applications. Due to reasons* each application's log file ends up in a separate directory on the ELK server. We have about 20 log files.

  1. As I understand it, we can run one Logstash pipeline config file per application log file. That would be one Logstash instance running 20 pipelines in parallel, and each pipeline would need its own Beats port. Is that correct?
  2. Can we run one Filebeat instance, or do we need one per pipeline/log file?
  3. Is this architecture OK, or do you see any major downsides?

Thank you!

*Different vendors are responsible for different applications; they run across many different operating systems, and many of them will not or cannot install anything like Filebeat.

Comments:

Filebeat should be on the same server that has the application logs, and you can have one Filebeat configured to read multiple files. – ElasticCode

Thanks for your response, but in my case it is not possible to install Filebeat on the servers running the applications. Instead the files will be read from the ELK server. Could you please share a link to documentation regarding Filebeat and many logs? – user1329339

Check my answer. – ElasticCode

1 Answer


We do not recommend reading log files from network volumes. Whenever possible, install Filebeat on the host machine and send the log files directly from there. Reading files from network volumes (especially on Windows) can have unexpected side effects. For example, changed file identifiers may result in Filebeat reading a log file from scratch again.

Reference

We always recommend installing Filebeat on the remote servers. Using shared folders is not supported. The typical setup is that you have a Logstash + Elasticsearch + Kibana in a central place (one or multiple servers) and Filebeat installed on the remote machines from where you are collecting data.

Reference

With a single Filebeat instance you can apply different configuration settings to different files by defining multiple input sections, as in the example below (check here for more):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - 'C:\App01_Logs\log.txt'
  tags: ["App01"]
  fields:
    app_name: App01

- type: log
  enabled: true
  paths:
    - 'C:\App02_Logs\log.txt'
  tags: ["App02"]
  fields:
    app_name: App02

- type: log
  enabled: true
  paths:
    - 'C:\App03_Logs\log.txt'
  tags: ["App03"]
  fields:
    app_name: App03
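A single Filebeat instance has exactly one output, so all of the inputs above ship to one Logstash Beats port rather than twenty. A minimal sketch of the matching output section, assuming Logstash listens on the default Beats port 5044 on the same host:

```yaml
output.logstash:
  # One Beats endpoint is enough; the tags/fields set per input
  # identify each application downstream in Logstash.
  hosts: ["localhost:5044"]
```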

And you can have one Logstash pipeline with an if statement in the filter:

filter {
    if [fields][app_name] == "App01" {
      grok { }
    } else if [fields][app_name] == "App02" {
      grok { }
    } else {
      grok { }
    }
}
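The grok bodies above are placeholders. As an illustration only (the real App01 log format is not given in the question, so this pattern is an assumption), one branch might look like:

```
if [fields][app_name] == "App01" {
  grok {
    # Hypothetical pattern for lines like
    # "2021-01-01 12:00:00 ERROR something went wrong"
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
```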

The condition can also be if "App02" in [tags] or if [source] == "C:\App01_Logs\log.txt", using the tags and source fields we send from Filebeat.
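A sketch of those alternative conditions (note that backslashes in a Windows path must be escaped inside a Logstash string, and on Filebeat 7+ the path field is [log][file][path] rather than [source]):

```
filter {
  if "App02" in [tags] {
    # matched via the tags set in filebeat.yml
  } else if [source] == "C:\\App01_Logs\\log.txt" {
    # matched via the file path reported by Filebeat
  }
}
```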