6
votes

I'm running Filebeat to ship logs from a Java service that runs in a container. The host runs many other containerized services, and the same Filebeat daemon gathers the logs of all the containers running on the host. Filebeat forwards the logs to Logstash, which dumps them into Elasticsearch.

I'm trying to use Filebeat's multiline capability to combine the lines of a Java exception into one log entry, using the following Filebeat configuration:

filebeat:
  prospectors:
    # container logs
    -
      paths:
        - "/log/containers/*/*.log"
      document_type: containerlog
      multiline:
        pattern: "^\t|^[[:space:]]+(at|...)|^Caused by:"
        match: after

output:
  logstash:
    hosts: ["{{getv "/logstash/host"}}:{{getv "/logstash/port"}}"]
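Before digging into Filebeat itself, it can help to check the pattern outside Filebeat. Filebeat compiles `multiline.pattern` with Go's regexp package; the sketch below uses Python's `re` as a stand-in, with the POSIX class `[[:space:]]` translated to `\s` (Python's `re` doesn't support POSIX classes). One thing the check surfaces: the exception-message line itself (no leading whitespace, no `Caused by:`) does not match the pattern, so it would start a new event rather than being appended to the timestamped line above it.

```python
import re

# Python's re standing in for Go's regexp; [[:space:]] rendered as \s.
# Note that the unescaped "..." alternative matches ANY three characters.
pattern = re.compile(r"^\t|^\s+(at|...)|^Caused by:")

samples = [
    "[2016-05-25 12:39:04,744][DEBUG][action.bulk] failed to execute bulk item",
    "MapperParsingException[Field name [events.created] cannot contain '.']",
    "    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:193)",
    "Caused by: java.io.IOException",
]
for line in samples:
    # With negate unset (false) and match: after, lines that DO match
    # the pattern are appended to the previous line.
    print("continuation" if pattern.search(line) else "new event", "|", line[:55])
```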

Example of Java stacktrace that should be aggregated into one event:

This Java stacktrace is a copy of a Docker log entry (obtained by running docker logs java_service):

[2016-05-25 12:39:04,744][DEBUG][action.bulk              ] [Set] [***][3] failed to execute bulk item (index) index {[***][***][***], source[{***}}
MapperParsingException[Field name [events.created] cannot contain '.']
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:273)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:218)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:193)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:305)
    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:218)
    at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:139)
    at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:118)
    at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:99)
    at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:498)
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:257)
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230)
    at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:468)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

However, with the Filebeat configuration shown above, each line of the stacktrace still arrives in Elasticsearch as a separate event.

Any idea what I'm doing wrong? Note also that since I need to ship logs from several files with Filebeat, the multiline aggregation cannot be done on the Logstash side.

Versions

FILEBEAT_VERSION 1.1.0

1
That's not about multiline, I believe. Somewhere in the flow of events from Filebeat to ES, something (Filebeat, Logstash) is trying to add a field to a mapping in ES that is forbidden: dots in field names. And this is with ES 2.x. – Andrei Stefan
Did you upgrade ES recently? – Andrei Stefan
@AndreiStefan I'm not concerned about the ES error per se; I induced it. What I really want is to parse that Java stacktrace (and others) in Filebeat so that each exception produces only one event in the Elasticsearch cluster that stores my logging messages. – gpestana
Ooh, that was just a sample event. Sorry about jumping into the ES issues. – Andrei Stefan
What do you get if you use multiline: pattern: '^\[' negate: true match: after? – Andrei Stefan
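The suggestion in the last comment can be sanity-checked the same way (Python's `re` standing in for Go's regexp). With `negate: true` and `match: after`, lines that do *not* match the pattern are appended to the previous line, so with `^\[` every timestamped header starts a new event and everything else, including the exception-message line, becomes a continuation:

```python
import re

# negate: true inverts the match: lines NOT matching ^\[ are appended
# to the previous line, so only timestamped headers start new events.
pattern = re.compile(r"^\[")

samples = [
    "[2016-05-25 12:39:04,744][DEBUG][action.bulk] failed to execute bulk item",
    "MapperParsingException[Field name [events.created] cannot contain '.']",
    "    at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parse(ObjectMapper.java:193)",
]
for line in samples:
    print("new event" if pattern.search(line) else "continuation", "|", line[:55])
```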

1 Answer

1
votes

Stumbled over this problem today as well.

This is working for me (filebeat.yml):

filebeat.prospectors:
- type: log
  multiline.pattern: "^[[:space:]]+(at|\\.{3})\\b|^Caused by:"
  multiline.negate: false
  multiline.match: after
  paths:
    - '/var/lib/docker/containers/*/*.log'
  json.message_key: log
  json.keys_under_root: true
  processors:
  - add_docker_metadata: ~
output.elasticsearch:
  hosts: ["es-client.es-cluster:9200"]

I use Filebeat 6.2.2 to send the logs directly to Elasticsearch.
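The same kind of quick check (Python's `re` standing in for Go's regexp, with `[[:space:]]` rendered as `\s`) can be applied to this answer's pattern. The `at` and `Caused by:` alternatives match as expected; one caveat the check turns up is that `\b` after `\.{3}` requires a word character right after the three dots, so a typical `"\t... 23 more"` frame is not treated as a continuation:

```python
import re

# Python's re standing in for Go's regexp; [[:space:]] rendered as \s.
# In YAML, "\\.{3}\\b" becomes the regex \.{3}\b.
pattern = re.compile(r"^\s+(at|\.{3})\b|^Caused by:")

samples = [
    "    at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:498)",
    "Caused by: java.io.IOException",
    "\t... 23 more",  # \b fails between '.' and ' ' (both non-word chars)
    "[2016-05-25 12:39:04,744][DEBUG][action.bulk] failed to execute bulk item",
]
for line in samples:
    # negate: false, match: after -> matching lines are continuations.
    print("continuation" if pattern.search(line) else "new event", "|", line[:55])
```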