4
votes

I am using Filebeat to push my logs to Elasticsearch through Logstash, and the setup was working fine for me before. Now I am getting a "Failed to publish events" error.

filebeat       | 2020-06-20T06:26:03.832969730Z 2020-06-20T06:26:03.832Z    INFO    log/harvester.go:254    Harvester started for file: /logs/app-service.log
filebeat       | 2020-06-20T06:26:04.837664519Z 2020-06-20T06:26:04.837Z    ERROR   logstash/async.go:256   Failed to publish events caused by: write tcp YY.YY.YY.YY:40912->XX.XX.XX.XX:5044: write: connection reset by peer
filebeat       | 2020-06-20T06:26:05.970506599Z 2020-06-20T06:26:05.970Z    ERROR   pipeline/output.go:121  Failed to publish events: write tcp YY.YY.YY.YY:40912->XX.XX.XX.XX:5044: write: connection reset by peer
filebeat       | 2020-06-20T06:26:05.970749223Z 2020-06-20T06:26:05.970Z    INFO    pipeline/output.go:95   Connecting to backoff(async(tcp://xx.com:5044))
filebeat       | 2020-06-20T06:26:05.972790871Z 2020-06-20T06:26:05.972Z    INFO    pipeline/output.go:105  Connection to backoff(async(tcp://xx.com:5044)) established

Logstash pipeline

02-beats-input.conf

input {
  beats {
    port => 5044
  }
}

10-syslog-filter.conf

filter {
    json {
        source => "message"
    }
}

30-elasticsearch-output.conf

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "index-%{+YYYY.MM.dd}"
  }
}
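Before looking at the Filebeat side, it can be worth confirming that Logstash accepts this pipeline at all. A hedged sketch, assuming a standard package install under /usr/share/logstash and /etc/logstash:

```shell
# Parse the pipeline files without starting the service.
# -t / --config.test_and_exit only validates the config; it processes no events.
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t

# Then watch the Logstash log while Filebeat connects:
sudo journalctl -u logstash -f
```

If the config test passes but connections are still reset, the problem is usually on the wire (TLS mismatch, firewall, proxy) rather than in the pipeline.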

Filebeat configuration. Sharing my Filebeat config at /usr/share/filebeat/filebeat.yml:


filebeat.inputs:

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /logs/*

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["xx.com:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
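With the config above in place, Filebeat's built-in test subcommands can check both the YAML and the connection to Logstash before tailing any logs (the -c path matches the config location mentioned above):

```shell
# Check that filebeat.yml parses cleanly:
filebeat test config -c /usr/share/filebeat/filebeat.yml

# Probe the configured output: resolves xx.com, opens TCP 5044,
# and performs the TLS handshake if any ssl.* settings are enabled.
filebeat test output -c /usr/share/filebeat/filebeat.yml
```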

When I run telnet xx.xx 5044, this is what I see in the terminal:

Trying X.X.X.X...
Connected to xx.xx.
Escape character is '^]'.
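Note that a successful telnet only proves the TCP handshake; a reset on the first write is the classic symptom of a TLS mismatch (one side speaking plain TCP while the other expects SSL). One way to tell which the port expects, assuming openssl is available:

```shell
# If this handshake completes, port 5044 expects TLS and a plain-TCP
# Filebeat output will be reset on write; if it fails immediately,
# the port speaks plain TCP and Filebeat must not enable ssl.* settings.
openssl s_client -connect xx.com:5044 </dev/null
```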
1
I am facing the same issue. My Elasticsearch, Logstash, and Kibana are running fine, but when logs are pushed from Filebeat to Logstash, something goes wrong and stops my Logstash and Elasticsearch instances. Have you found a solution to your problem? – Ghost Rider

1 Answer

0
votes

I had the same problem. Here are some steps that could help you find the root of your problem. First, I tested this chain: filebeat (localhost) -> logstash (localhost) -> elastic -> kibana, with every service on the same machine.

My /etc/logstash/conf.d/config.conf:

input {
  beats {
    port => 5044
    ssl => false
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
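For that input to work, the Filebeat side has to match: with ssl => false on the beats input, the corresponding output.logstash section must not carry any ssl.* settings. A minimal sketch of the matching filebeat.yml output:

```yaml
output.logstash:
  hosts: ["localhost:5044"]
  # no ssl.certificate / ssl.key / ssl.certificate_authorities here
  # while the Logstash beats input has ssl => false
```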

Here I deliberately disabled SSL (in my case that was the main cause of the issue, even though the certificates were correct; magic). After that, don't forget to restart Logstash and test with the sudo filebeat -e command. If everything is OK, you won't see the 'connection reset by peer' error anymore.