
The following is a raw log:

2017-09-17 08:34:54 181409 10.110.82.122 200 TCP_TUNNELED 4440 1320 CONNECT tcp cdn.appdynamics.com 443 / - ANILADE - 10.100.134.6 - - "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36" OBSERVED "Technology/Internet" - 10.100.134.6

This is my logstash configuration file:

input {
    beats {
        port => "5044"
    }
}

filter {
    # grok filter: parse the proxy log line into named fields
    grok {
        match => {
            "message" => "%{TIMESTAMP_ISO8601:@timestamp} (%{NUMBER:time_taken}|\-) (%{IP:sourceIP}|\-) (%{NUMBER:status}|\-) (%{WORD:action}|\-) (%{NUMBER:scBytes}|\-) (%{NUMBER:csBytes}|\-) (%{WORD:method}|\-) (%{WORD:uri_scheme}|\-) (%{URIHOST:url}|\-) (%{NUMBER:port}|\-) (?<uri_path>([a-zA-Z0-9\/\.\?\-\_]+)|(\/)) (?<uri_query>([a-zA-Z0-9\/\.\?\-\=\&\%]+)) (?<username>([a-zA-Z0-9\/\.\?\-]+)) (?<auth_group>([a-zA-Z0-9\/\.\?\-]+)) (?<destIP>([a-zA-Z0-9\.\-]+)) (?<content_type>([a-zA-Z0-9\-\/\;\%\=]+)) (?<referer>[a-zA-Z0-9\-\/\;\%\=\:\.]+) (%{QUOTEDSTRING:user_agent}|\-) (%{WORD:filter_result}|\-) (%{QUOTEDSTRING:category}|\-) (?<vir_id>([a-zA-Z0-9\-\/.])) (%{IP:proxyIP}|\-)"
        }
    }

    # date filter: parse the extracted time into the event timestamp
    date {
        match => ["@timestamp", "ISO8601"]
    }
}


output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "proxylog"
    }
}

I want to pick up the event time from the logs, so I used the date filter as suggested in the Date plugin documentation. But regardless of the configuration, I can see the indexing time in the @timestamp field when I view the documents in Kibana. I can't find what I am missing or where I am wrong. I also tried not using TIMESTAMP_ISO8601 in the grok filter by replacing it with

%{YEAR:year}-%{MONTHNUM:month}-%{MONTHDAY:day} %{TIME:time}

and adding a timestamp field as:

add_field => ["timestamp", "%{year}-%{month}-%{day} %{time}"]

and then changing the date filter config to:

date {
    match => ["timestamp", "YYYY-MM-DD HH:mm:ss"]
    remove_field => ["timestamp", "year", "month", "day"]
}

but had no luck. Can someone point out a solution? None of the threads I found on the Elastic forum or Stack Overflow helped. Also, do I need to change anything in the Filebeat configuration, since I am using Filebeat to ship the logs to Logstash?

In your original configuration, remove the @ from %{TIMESTAMP_ISO8601:@timestamp} and from match => ["@timestamp", "ISO8601"], then try again, please? The problem might come from this. – baudsp
Will give it a try and let you know how it goes. @baudsp – Akash
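For illustration, a minimal sketch of what that suggestion amounts to (only the two affected pieces of the configuration are shown; the field name timestamp is just an example, and the rest of the grok pattern stays as it is):

# in the grok pattern, capture into a plain field instead of @timestamp
%{TIMESTAMP_ISO8601:timestamp}

# in the date filter, parse that field; the parsed value is written to @timestamp by default
date {
    match => ["timestamp", "ISO8601"]
}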

2 Answers

1 vote

You can add target => "new_field" in your date filter and use that field as the time field when you create the index pattern in Kibana.
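For illustration, a minimal sketch of that approach, assuming the grok pattern captures the log time into a field called timestamp (the field name log_timestamp is just an example, not taken from the answer):

date {
    # parse the timestamp string that grok extracted
    match  => ["timestamp", "ISO8601"]
    # store the parsed date in a new field instead of overwriting @timestamp
    target => "log_timestamp"
}

When creating the index pattern in Kibana, you would then select log_timestamp as the time field.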

0 votes

This is probably because @timestamp is a timestamp field, but your grok{} is treating it like a string.

First, grok the string into a new field, e.g.

%{TIMESTAMP_ISO8601:[@metadata][timestamp]}

Then set @timestamp by using the date{} filter with [@metadata][timestamp] as the input.
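For illustration, a minimal sketch of that approach (the full grok pattern from the question is abbreviated with %{GREEDYDATA:rest} here, and the yyyy-MM-dd HH:mm:ss format is an assumption matching the space-separated timestamp in the sample log line):

filter {
    grok {
        # capture the leading timestamp into a metadata field instead of @timestamp;
        # the rest of the original pattern would follow this token
        match => { "message" => "%{TIMESTAMP_ISO8601:[@metadata][timestamp]} %{GREEDYDATA:rest}" }
    }
    date {
        # parse the metadata field; the date filter writes the result to @timestamp by default
        match => ["[@metadata][timestamp]", "yyyy-MM-dd HH:mm:ss", "ISO8601"]
    }
}

Fields under [@metadata] are not sent to Elasticsearch, so the temporary timestamp string never ends up in the index.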