0 votes

I am running a Splunk query for a date range, and it works fine. I want to run the same query for different date ranges, let's say 1 day, 7 days, and a month. Example query that runs for a day:

 index="a" env="test" MachineIdentifier source="D:\\Inetpub\\Logs\\app*.log" earliest=-2d latest=-1d 
| top limit=50 MachineIdentifier
| sort MachineIdentifier asc

Currently I run this query for each date range by modifying the "earliest" and "latest" values and exporting the results for consolidation.

I want to prepare a single query which gives this data for 1 day, 7 days, etc. in a single report. Is that possible?

EDIT:

I figured out the query below, but I am not able to get percentage details like the query above gives. How do I show percentage details in the results?

index="a" env="test" MachineIdentifier source="D:\\Inetpub\\Logs\\app*.log"  earliest=-2d@d latest=-1d@d 
|fields MachineIdentifier | eval marker="1DayData" 
| append 
[search index="a" env="test" MachineIdentifier source="D:\\Inetpub\\Logs\\app*.log"  earliest=-3d@d latest=-1d@d 
|fields MachineIdentifier | eval marker="2DaysData"] 
| stats count(eval(marker="1DayData")) AS 1DayCount, count(eval(marker="2DaysData")) AS 2DaysCount by MachineIdentifier

2 Answers

2 votes

One approach is to use append. Since top already returns count and percent columns, each appended result set carries its own percentage details.

index="a" env="test" MachineIdentifier source="D:\\Inetpub\\Logs\\app*.log" earliest=-2d latest=-1d 
| top limit=50 MachineIdentifier
| sort MachineIdentifier asc 
| eval duration="daily"
| append 
    [search index="a" env="test" MachineIdentifier source="D:\\Inetpub\\Logs\\app*.log" earliest=-7d latest=-1d 
    | top limit=50 MachineIdentifier
    | sort MachineIdentifier asc
    | eval duration="weekly"]
| append 
    [search index="a" env="test" MachineIdentifier source="D:\\Inetpub\\Logs\\app*.log" earliest=-30d latest=-1d 
    | top limit=50 MachineIdentifier
    | sort MachineIdentifier asc 
    | eval duration="monthly"]

This isn't the most efficient method, however. You may want to look into tstats for performance reasons.
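As a rough sketch, the weekly slice with tstats might look like the following. This assumes MachineIdentifier is an index-time field, since tstats can only split by indexed fields (or fields in an accelerated data model); env is likely a search-time field, so it is left out here. The eventstats/eval lines approximate the percent column that top would give you.

| tstats count where index="a" source="D:\\Inetpub\\Logs\\app*.log" earliest=-7d latest=-1d by MachineIdentifier
| eventstats sum(count) AS total
| eval percent=round(count*100/total,2)
| sort 50 -count
| eval duration="weekly"
| fields MachineIdentifier count percent duration

You could append the 1-day and 30-day variants in the same way, or combine tstats with the single-pass idea in the other answer.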

0 votes

One issue with the previous query is that Splunk fetches the data three times. There is some caching involved, but the data still gets processed three times.

Here is another attempt that tries to reduce the amount of data retrieval. Try both examples and see what works best for you.

index="a" env="test" MachineIdentifier source="D:\\Inetpub\\Logs\\app*.log" earliest=-31d@d
| eval now=now()
| eval date_range=case(
  _time>relative_time(now,"-1d@d"),"daily",
  _time>relative_time(now,"-7d@d"),"weekly",
  1==1,"monthly"
  )
| appendpipe [ where date_range="daily" AND isnull(res_duration) | top limit=50 MachineIdentifier | eval res_duration="daily" ]
| appendpipe [ where (date_range="daily" OR date_range="weekly") AND isnull(res_duration) | top limit=50 MachineIdentifier | eval res_duration="weekly" ]
| appendpipe [ where isnull(res_duration) | top limit=50 MachineIdentifier | eval res_duration="monthly" ]
| where isnotnull(res_duration)
| table MachineIdentifier count percent res_duration

Depending on the volume of data you are processing, you may still want to look at the tstats command.
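As a sketch of that idea (again assuming MachineIdentifier is an indexed field, so tstats can group by it), you could let tstats pre-aggregate daily counts per machine and then sum them conditionally per window, much like the stats count(eval(...)) query in the question:

| tstats count where index="a" source="D:\\Inetpub\\Logs\\app*.log" earliest=-30d@d by MachineIdentifier _time span=1d
| stats sum(eval(if(_time>=relative_time(now(),"-1d@d"),count,0))) AS 1DayCount
        sum(eval(if(_time>=relative_time(now(),"-7d@d"),count,0))) AS 7DaysCount
        sum(count) AS 30DaysCount
        by MachineIdentifier

Note that these windows run up to now rather than to -1d, so adjust the relative_time offsets if you need the same boundaries as your original searches.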