
I have multiple Logstash instances shipping logs directly to a central Elasticsearch server (using the elasticsearch output).

This works fine so far. However, when Elasticsearch goes down (e.g. because the whole server is restarted), Logstash doesn't resume sending logs once Elasticsearch is back up.

I have to restart Logstash manually. Additionally, all logs produced between Elasticsearch going down and the Logstash restart are lost.

How can I change my setup to make it more fault-tolerant?
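A minimal sketch of such a shipper configuration, for reference (the file input, path, and hostname are placeholders, not the actual setup; older Logstash versions use host => "..." instead of hosts):

    input {
      file {
        path => "/var/log/app/*.log"        # hypothetical application log
      }
    }

    output {
      # Ships every event straight to the central Elasticsearch server.
      elasticsearch {
        hosts => ["http://central-es.example.com:9200"]   # placeholder host
      }
    }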

Comment: Logstash should figure it out, even with one node in your cluster. – Alain Collins

2 Answers


You should consider using a broker: send all your logs to a message queue (for example RabbitMQ), have Logstash pull messages from there, and send the data on to Elasticsearch. If Elasticsearch goes down, Logstash will stop pulling messages and they will accumulate in the broker; once the connection is re-established, your messages will be written.
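A minimal sketch of this pattern, assuming the rabbitmq input/output plugins and a broker reachable at broker.example.com (the host, exchange, and queue names are illustrative):

    # Shipper (runs on each application server): publish events to the broker
    output {
      rabbitmq {
        host          => "broker.example.com"   # placeholder broker host
        exchange      => "logs"
        exchange_type => "direct"
        key           => "logstash"
        durable       => true                   # survives a broker restart
      }
    }

    # Indexer (runs centrally): pull events from the broker, write to Elasticsearch
    input {
      rabbitmq {
        host     => "broker.example.com"
        exchange => "logs"
        key      => "logstash"
        queue    => "logstash"
        durable  => true
      }
    }
    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]      # older versions use host => "..."
      }
    }

If Elasticsearch is unreachable, the indexer stops consuming and events simply accumulate in the durable queue instead of being dropped.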


Add another server and start an Elasticsearch cluster. Elasticsearch is built to scale to multiple nodes, and the Logstash client will join the cluster and fail over automatically.
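One way to express the failover on the Logstash side (a sketch assuming a recent elasticsearch output plugin; the node names are placeholders):

    output {
      elasticsearch {
        # List several nodes of the cluster; the output fails over to
        # another node if the one it is using becomes unreachable.
        hosts => ["http://es-node1:9200", "http://es-node2:9200"]
      }
    }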

Learn more about the distributed nature of Elasticsearch.