2 votes

I have a simple logstash grok filter:

filter {
  grok {
    match => { "message" => "^%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:name} %{WORD:level} %{SPACE} %{GREEDYDATA:message}$" }
    overwrite => [ "message" ]
  }
}

This works and parses my logs, but according to Kibana the timestamp values are output with the data type string.

The logstash @timestamp field has data type date.

The grok documentation says you can specify a data type conversion, but only int and float are supported:

If you wish to convert a semantic’s data type, for example change a string to an integer then suffix it with the target data type. For example %{NUMBER:num:int} which converts the num semantic from a string to an integer. Currently the only supported conversions are int and float.

That suggests I'm supposed to leave it as a string. However, if the index supports datetime values, why would you not want it properly stored and sortable as a datetime?


1 Answer

7 votes

You can, but you need another filter, the date filter, to convert it to a date.

filter {
  date {
    # The pattern must be quoted; "ISO8601" matches timestamps like the one grok extracted
    match => [ "timestamp", "ISO8601" ]
  }
}

The usual usage is to set the @timestamp field this way. But if you want to do it to another field and leave @timestamp alone:

filter {
  date {
    match  => [ "timestamp", "ISO8601" ]
    # Write the parsed date here instead of overwriting @timestamp
    target => "target_timestamp"
  }
}

This will give you a field named target_timestamp with the Elasticsearch date type you're looking for.
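
For completeness, here is a minimal sketch that chains the grok pattern from the question with the date filter described above. The mutate/remove_field step at the end is an optional addition of mine (not part of the original answer) to drop the redundant string field once it has been converted:

filter {
  grok {
    match     => { "message" => "^%{TIMESTAMP_ISO8601:timestamp} %{NOTSPACE:name} %{WORD:level} %{SPACE} %{GREEDYDATA:message}$" }
    overwrite => [ "message" ]
  }
  date {
    # Parse the string grok extracted into a real date value
    match  => [ "timestamp", "ISO8601" ]
    target => "target_timestamp"
  }
  # Optional: remove the original string field after conversion
  mutate {
    remove_field => [ "timestamp" ]
  }
}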