
Below is a log line generated by a Spring application; I'm trying to create custom grok filters for it.

{"@timestamp":"2021-02-19T10:27:42.275+00:00","severity":"INFO","service":"capp","pid":"19592","thread":"SmsListenerContainer-9","class":"c.o.c.backend.impl.SmsBackendServiceImpl","rest":"[SmsListener] [sendSMS] [63289e8d-13c9-4622-b1a1-548346dd9427] [synemail] [ABSENT] [synfi] [0:0:0:0:0:0:0:1] [N/A] [N/A] [End Method]"}

Output expecting after applying the filters is

id   => "63289e8d-13c9-4622-b1a1-548346dd9427"
token1   => "synemail"

1 Answer


First, I'd recommend parsing the text as JSON to extract the "rest" value into its own field. Then, assuming that the "rest" value always has the same structure — in particular, that the id is always in the third [] block and the token in the fourth — this grok rule should work for you:

\[%{DATA}\] \[%{DATA}\] \[%{DATA:id}\] \[%{DATA:token1}\]

Note that you can always test your grok rules in Kibana, using the Grok debugger: https://www.elastic.co/guide/en/kibana/7.11/xpack-grokdebugger.html
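The two-step approach (JSON parse, then grok) can be sketched as a Logstash filter block. This is a sketch, assuming the raw log line arrives in the default `message` field:

```
filter {
  # Parse the whole log line as JSON; this puts the "rest" value into its own field
  json {
    source => "message"
  }
  # Extract id and token1 from the third and fourth bracketed blocks of "rest"
  grok {
    match => { "rest" => '\[%{DATA}\] \[%{DATA}\] \[%{DATA:id}\] \[%{DATA:token1}\]' }
  }
}
```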

And if you want to apply grok to the JSON directly, without preprocessing it, this is the rule:

"rest":"\[%{DATA}\] \[%{DATA}\] \[%{DATA:id}\] \[%{DATA:token1}\]

Update based on the OP comments:

Assuming that the field you're parsing is "message" and that its value is JSON as text with escaped quotes, the full configuration of the Logstash grok filter would look something like this:

grok {
   match => { "message" => '\"rest\":\"\[%{DATA}\] \[%{DATA}\] \[%{DATA:id}\] \[%{DATA:token1}\]' }
}
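To check the result locally, you can wrap that filter in a minimal test pipeline (a sketch — run it with `bin/logstash -f test.conf` and paste the log line on stdin; the file name is an assumption):

```
input {
  stdin { }
}
filter {
  grok {
    match => { "message" => '\"rest\":\"\[%{DATA}\] \[%{DATA}\] \[%{DATA:id}\] \[%{DATA:token1}\]' }
  }
}
output {
  # Pretty-prints every event field, so you can verify that id and token1 were extracted
  stdout { codec => rubydebug }
}
```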