0 votes

On a Windows Server, I have a Filebeat process that takes all my NCSA logs (HTTP requests in NCSA format) and sends them to our Redis database (a buffer that later feeds an ELK stack).

The first time I executed my Filebeat process, it took all my logs and sent them to Redis. Perfect, except that @timestamp was set to the day of execution rather than the day of the log itself (I had 6 months of history). That did not look good in Kibana, as all 6 months of log history landed at the same time (to the minute).

I have one file per day, and I would like to know whether it is possible in the Filebeat config to extract the timestamp from the log line itself and use it as @timestamp, so that every line/file carries the correct time.

The log lines look like this:

172.16.70.18 -  -  [03/Dec/2016:09:24:24 +0000] "GET /svc/product/v2/IDCDN8QH00?sid=mobile HTTP/1.1" 404 411 "-" "Jakarta Commons-HttpClient/3.1"
172.16.70.18 -  -  [03/Dec/2016:13:00:52 +0000] "GET /svc/asset/v2/560828?sid=mobile HTTP/1.1" 200 6670 "-" "Jakarta Commons-HttpClient/3.1"
172.16.82.232 -  -  [03/Dec/2016:15:15:55 +0000] "GET /svc/store/v1?sid=tutu&lang=en&lang=fr HTTP/1.1" 200 828 "-" "Apache-HttpClient/4.5.1 (Java/1.7.0_51)"
172.16.82.235 -  -  [02/Dec/2016:15:15:55 +0000] "GET /svc/asset/v1?sid=tutu&size=1 HTTP/1.1" 200 347 "-" "Apache-HttpClient/4.5.1 (Java/1.7.0_51)"
172.16.82.236 -  -  [02/Dec/2016:15:16:02 +0000] "GET /svc/product/v2?sid=tutu HTTP/1.1" 200 19226 "-" "Apache-HttpClient/4.5.1 (Java/1.7.0_51)"
172.16.82.237 -  -  [02/Dec/2016:15:16:14 +0000] "GET /svc/catalog/v2?sid=tutu HTTP/1.1" 200 223174 "-" "Apache-HttpClient/4.5.1 (Java/1.7.0_51)"
172.16.82.238 -  -  [02/Dec/2016:15:16:26 +0000] "GET /svc/store/v1?sid=tutu&lang=en&lang=fr HTTP/1.1" 200 3956 "-" "Apache-HttpClient/4.5.1 (Java/1.7.0_51)"
172.16.70.15 -  -  [01/Dec/2016:15:53:42 +0000] "GET /svc/product/v2/IDAB062200?sid=mobile HTTP/1.1" 200 5400 "-" "Jakarta Commons-HttpClient/3.1"
172.16.70.17 -  -  [01/Dec/2016:15:53:42 +0000] "GET /svc/product/v2/IDAB800851?sid=mobile HTTP/1.1" 200 3460 "-" "Jakarta Commons-HttpClient/3.1"
172.16.70.18 -  -  [01/Dec/2016:16:35:36 +0000] "GET /svc/product/v2/IDAB601071?sid=mobile HTTP/1.1" 404 400 "-" "Jakarta Commons-HttpClient/3.1"
172.16.70.18 -  -  [01/Dec/2016:16:35:36 +0000] "GET /svc/product/v2/IDCDN8QH00?sid=mobile HTTP/1.1" 401 400 "-" "Jakarta Commons-HttpClient/3.1"

Additionally, I would like to know whether I can use a processor to, for example, create a new field 'IP' containing the first column.

I saw a similar post, but it seems to work only with a direct integration to Elasticsearch. My Filebeat output is Redis.

What are you using to go from Redis to Elasticsearch? — A J
The chain Redis -> Logstash -> ES -> Kibana is not under my responsibility, so I cannot change anything there. I just want to know whether Filebeat can transform the "keys" before they are sent to the Redis DB. Another alternative, I think, would be to put a Logstash agent on my server; that might address my problem. — рüффп

1 Answer

1 vote

The only parsing capability that Filebeat has is for JSON logs. So if you can change the server to write its logs in JSON format, you could accomplish this without touching the rest of your pipeline (Redis -> Logstash -> Elasticsearch).
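As a minimal sketch of that option (assuming Filebeat 5.x; the file path, Redis host, and key name are placeholders), the JSON decoding settings can lift fields out of each line, and `overwrite_keys` lets an `@timestamp` key from the JSON replace the read-time timestamp Filebeat would otherwise set:

```yaml
# filebeat.yml — hypothetical example; adjust paths and hosts to your setup.
filebeat.prospectors:
  - input_type: log
    paths:
      - C:\logs\access-*.json   # placeholder path to JSON-formatted logs
    json:
      # Place decoded JSON keys at the top level of the event.
      keys_under_root: true
      # Let keys from the JSON line (including "@timestamp") overwrite
      # the values Filebeat adds, so the event keeps the log's own time.
      overwrite_keys: true

output.redis:
  hosts: ["redis-host:6379"]    # placeholder Redis host
  key: "filebeat"               # placeholder Redis list key
```

This only works if each log line is a single JSON object containing a correctly formatted `@timestamp` field.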

Otherwise you would need to use Logstash to do the parsing before writing the data to Redis. You could run Filebeat -> Logstash -> Redis, or just Logstash -> Redis.
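A rough Logstash sketch for that path (the port, Redis host, and key are placeholders): grok's stock `COMMONAPACHELOG` pattern matches NCSA lines like the ones above and already produces a `clientip` field, which also covers the 'IP' column question, and the date filter copies the bracketed timestamp into `@timestamp`:

```
input {
  beats { port => 5044 }    # Filebeat -> Logstash; or use a file input instead
}

filter {
  grok {
    # NCSA common log format; yields clientip, timestamp, verb, request, ...
    match => { "message" => "%{COMMONAPACHELOG}" }
  }
  date {
    # Parse the bracketed NCSA timestamp and set it as @timestamp.
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  redis {
    host      => "redis-host"   # placeholder
    data_type => "list"
    key       => "logstash"     # placeholder key
  }
}
```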

Or, if you can modify the upstream Logstash config, you could do the parsing there.