I am studying Logstash and how to use its filters and grok patterns, and there is one thing I need to clarify.
Let's say our logs contain a timestamp field like:
[01/Sep/2015:06:22:11 -0400]
Using grok, I can define a pattern to capture this as an HTTPDATE, like this:
\[%{HTTPDATE:timestamp}\]
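In a full Logstash config, that pattern would sit in a grok filter block something like this (assuming the log line arrives in the default `message` field):

```
filter {
  grok {
    match => { "message" => "\[%{HTTPDATE:timestamp}\]" }
  }
}
```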
In the grok debugger, I can see that it has identified the date, time, etc.:
{
  "timestamp": [
    [
      "01/Sep/2015:06:22:11 -0400"
    ]
  ],
  "MONTHDAY": [
    [
      "01"
    ]
  ],
  "MONTH": [
    [
      "Sep"
    ]
  ],
  "YEAR": [
    [
      "2015"
    ]
  ],
  "TIME": [
    [
      "06:22:11"
    ]
  ],
  "HOUR": [
    [
      "06"
    ]
  ],
  "MINUTE": [
    [
      "22"
    ]
  ],
  "SECOND": [
    [
      "11"
    ]
  ],
  "INT": [
    [
      "-0400"
    ]
  ]
}
Now, I was looking at a tutorial on the Logstash website where they use an additional date filter to store this in a date field, like this:
date {
  match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  locale => "en"
}
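To check my understanding of what that match pattern means, here is how I would parse the same timestamp in Python; as far as I can tell, the strptime tokens below correspond to the Joda-style tokens in the date filter (day, abbreviated month name, year, hour, minute, second, numeric timezone offset):

```python
from datetime import datetime

# The raw string captured by grok as "timestamp"
s = "01/Sep/2015:06:22:11 -0400"

# %d/%b/%Y:%H:%M:%S %z mirrors the date filter's dd/MMM/yyyy:HH:mm:ss Z
dt = datetime.strptime(s, "%d/%b/%Y:%H:%M:%S %z")
print(dt.isoformat())  # 2015-09-01T06:22:11-04:00
```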
What this does is store another field containing the same date in a different format. My question is: why store two date fields representing the same date, just formatted differently? Can we not use the date field from the first stage the same way we would use the date field from the second stage?