I am not able to compose a geo_point field in Logstash by combining latitude and longitude. I followed instructions written by others, but it looks like those examples are based on older versions of ELK. Since Elasticsearch 2.2 there have been major changes regarding geo_point, and I am not sure whether I performed all the steps correctly. Below I explain my setup.
This is the version of Elasticsearch I use:
curl -XGET 'localhost:9200'
{
  "name" : "Artie",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.2.1",
    "build_hash" : "d045fc29d1932bce18b2e65ab8b297fbf6cd41a1",
    "build_timestamp" : "2016-03-09T09:38:54Z",
    "build_snapshot" : false,
    "lucene_version" : "5.4.1"
  },
  "tagline" : "You Know, for Search"
}
I run Elasticsearch, Logstash and Kibana in Docker containers, but that should not matter.
This is what my logstash.conf looks like:
cat logstash.conf
input {
  http_poller {
    urls => {
      myresource => "myhost/data.json"
    }
    request_timeout => 1
    interval => 1
    # Parse every line captured from data.json as a new event.
    codec => "line"
  }
}

filter {
  # Drop messages that do not contain "hex".
  if [message] !~ /\"hex\":/ {
    drop {}
  }

  # Capture "hex":"72d5a1"
  grok {
    match => { "message" => "\"hex\":\"(?<hex>[^\"]+)\"," }
  }
  mutate {
    convert => { "hex" => "string" }
  }

  # Capture "lat":50.047613
  if [message] =~ /\"lat\":/ {
    grok {
      match => { "message" => "\"lat\":(?<latitude>[^,]+)," }
    }
    mutate {
      convert => { "latitude" => "float" }
    }
  }

  # Capture "lon":1.702955
  if [message] =~ /\"lon\":/ {
    grok {
      match => { "message" => "\"lon\":(?<longitude>[^,]+)," }
    }
    mutate {
      convert => { "longitude" => "float" }
    }
  }

  # Combine latitude and longitude into a nested "location" field.
  mutate {
    rename => {
      "longitude" => "[location][lon]"
      "latitude"  => "[location][lat]"
    }
  }

  mutate {
    remove_field => [ "message" ]
  }
}

output {
  elasticsearch {
    hosts => [ "elasticsearchhost:9200" ]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
The important part is that "lon" and "lat" are captured from the "message" field and then renamed into a combined "location" field.
When I query Elasticsearch I get records like this:
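For context, a line from data.json looks roughly like this (the values are illustrative, and the trailing "seen" field is made up just to show why each grok pattern expects a comma after the captured value):

{"hex":"3e37aa","lat":52.329208,"lon":4.8246,"seen":1459793467}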
{
  "_index": "logstash-2016.04.04",
  "_type": "logs",
  "_id": "AVPieJtgVkabtr-H2szZ",
  "_score": null,
  "_source": {
    "@version": "1",
    "@timestamp": "2016-04-04T18:11:07.857Z",
    "hex": "3e37aa",
    "location": {
      "lon": 4.8246,
      "lat": 52.329208
    }
  },
  "fields": {
    "@timestamp": [
      1459793467857
    ]
  },
  "sort": [
    1459793467857
  ]
}
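(The record above came from a search sorted by @timestamp, something like this:)

curl -XGET 'localhost:9200/logstash-2016.04.04/_search?pretty&size=1&sort=@timestamp:desc'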
The notation "location": { "lon": 4.8246, "lat": 52.329208 } looks good according to what I read in the documentation. The problem is that I cannot select the "location" field as a geo_point in Kibana.
According to the Elasticsearch documentation I need to make sure that the "location" field is mapped to the geo_point type, which in turn requires doc_values to be enabled in order to function. I am not sure whether I need to do anything here, because when I look at my template it seems a geo_point mapping is already there by default (under "geoip", and in the "geo_point_fields" dynamic template): "location" : { "type" : "geo_point", "doc_values" : true }
This is what my template looks like:
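I assume the mapping that was actually applied to the index (as opposed to what the template says) can be inspected with the standard get-mapping API, e.g.:

curl -XGET 'localhost:9200/logstash-2016.04.04/_mapping?pretty'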
# curl -XGET localhost:9200/_template/logstash?pretty
{
  "logstash" : {
    "order" : 0,
    "template" : "logstash-*",
    "settings" : {
      "index" : {
        "refresh_interval" : "5s"
      }
    },
    "mappings" : {
      "_default_" : {
        "dynamic_templates" : [ {
          "message_field" : {
            "mapping" : {
              "fielddata" : {
                "format" : "disabled"
              },
              "index" : "analyzed",
              "omit_norms" : true,
              "type" : "string"
            },
            "match_mapping_type" : "string",
            "match" : "message"
          }
        }, {
          "string_fields" : {
            "mapping" : {
              "fielddata" : {
                "format" : "disabled"
              },
              "index" : "analyzed",
              "omit_norms" : true,
              "type" : "string",
              "fields" : {
                "raw" : {
                  "ignore_above" : 256,
                  "index" : "not_analyzed",
                  "type" : "string",
                  "doc_values" : true
                }
              }
            },
            "match_mapping_type" : "string",
            "match" : "*"
          }
        }, {
          "float_fields" : {
            "mapping" : {
              "type" : "float",
              "doc_values" : true
            },
            "match_mapping_type" : "float",
            "match" : "*"
          }
        }, {
          "double_fields" : {
            "mapping" : {
              "type" : "double",
              "doc_values" : true
            },
            "match_mapping_type" : "double",
            "match" : "*"
          }
        }, {
          "byte_fields" : {
            "mapping" : {
              "type" : "byte",
              "doc_values" : true
            },
            "match_mapping_type" : "byte",
            "match" : "*"
          }
        }, {
          "short_fields" : {
            "mapping" : {
              "type" : "short",
              "doc_values" : true
            },
            "match_mapping_type" : "short",
            "match" : "*"
          }
        }, {
          "integer_fields" : {
            "mapping" : {
              "type" : "integer",
              "doc_values" : true
            },
            "match_mapping_type" : "integer",
            "match" : "*"
          }
        }, {
          "long_fields" : {
            "mapping" : {
              "type" : "long",
              "doc_values" : true
            },
            "match_mapping_type" : "long",
            "match" : "*"
          }
        }, {
          "date_fields" : {
            "mapping" : {
              "type" : "date",
              "doc_values" : true
            },
            "match_mapping_type" : "date",
            "match" : "*"
          }
        }, {
          "geo_point_fields" : {
            "mapping" : {
              "type" : "geo_point",
              "doc_values" : true
            },
            "match_mapping_type" : "geo_point",
            "match" : "*"
          }
        } ],
        "_all" : {
          "omit_norms" : true,
          "enabled" : true
        },
        "properties" : {
          "@timestamp" : {
            "type" : "date",
            "doc_values" : true
          },
          "geoip" : {
            "dynamic" : true,
            "type" : "object",
            "properties" : {
              "ip" : {
                "type" : "ip",
                "doc_values" : true
              },
              "latitude" : {
                "type" : "float",
                "doc_values" : true
              },
              "location" : {
                "type" : "geo_point",
                "doc_values" : true
              },
              "longitude" : {
                "type" : "float",
                "doc_values" : true
              }
            }
          },
          "@version" : {
            "index" : "not_analyzed",
            "type" : "string",
            "doc_values" : true
          }
        }
      }
    },
    "aliases" : { }
  }
}
I did not add anything to this template. This is what it looks like after a fresh install of Logstash and Elasticsearch, once Logstash has been launched with my logstash.conf file.
My question is: what steps do I need to take so that the "location" field can be used as a geo_point in Kibana?
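If an explicit mapping turns out to be necessary after all, I assume the fix would be to add a "location" entry to the template's "_default_" properties, roughly like this untested sketch (merged into the template shown above):

"properties" : {
  "location" : {
    "type" : "geo_point",
    "doc_values" : true
  }
}

Since templates are only applied at index creation and an existing field mapping cannot be changed in place, I expect the index would then have to be deleted and re-created before the new mapping takes effect.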
Thanks a lot!