
I am trying to create a chart in Azure App Insights for all the 500 errors we receive, per cloud_RoleName and unique exception type. What I want is a week-over-week (and also month-over-month) trend showing whether we are getting new types of 500 errors compared to the previous week's data; basically, a week-over-week trend analysis for 500 errors. I have tried the following query:

requests
| where resultCode =="500" and timestamp > ago(1d)
| join (exceptions) on operation_Id 
| summarize by type, cloud_RoleName 

As I understand it, this returns only the previous day's data, summarized by type of 500 error. Unfortunately, I have not been able to form the query that turns this into a week-over-week trend. Any help with this KQL?

EDIT: After Yoni's response I found a blog where trend charts were generated for Security Events, so I went ahead and adapted the query from that blog, shown below. I am still not sure I am getting what I want; maybe someone can modify this query, as all I want is a trend of new 500 error types week over week per cloud_RoleName, built from the requests and exceptions tables: https://microsoftonlineguide.blogspot.com/2018/05/detect-malicious-activity-using-azure.html?showComment=1561507971564#c5650649192825890878

let T=requests 
| where resultCode =="500" and timestamp > ago(30d) 
| join (exceptions) on operation_Id 
| summarize by type, cloud_RoleName, Date = startofday(timestamp);
T
| evaluate activity_counts_metrics(type, Date, startofday(ago(30d)), startofday(now()), 1d, type, cloud_RoleName)
| extend WeekDate = startofweek(Date)
| project WeekDate, Date, type, PotentialAnomalyCount = new_dcount, cloud_RoleName
| join kind=inner
(
T
| evaluate activity_engagement(type, Date, startofday(ago(30d)), startofday(now()),1d, 7d)
| extend WeekDate = startofweek(Date)
| project WeekDate, Date, Distribution1day = dcount_activities_inner, Distribution7days = dcount_activities_outer, Ratio = activity_ratio*100
)
on WeekDate, Date
| where PotentialAnomalyCount == 1 and Ratio < 100
| project WeekDate, Date, type, cloud_RoleName, PotentialAnomalyCount, Distribution1day, Distribution7days, Ratio
| render barchart kind=stacked

1 Answer


It's a little challenging to reply without some understanding of your data and how it's structured.

That said, here's an attempt to answer based on the verbal description in your question, using the built-in activity_counts_metrics plugin (see its documentation for the full parameter list):

datatable(day:datetime, result_code:int)
[
    datetime(2019-05-01), 500, 
    datetime(2019-05-10), 500, 
    datetime(2019-05-20), 500, 
    datetime(2019-06-01), 500, 
    datetime(2019-06-02), 500, 
    datetime(2019-06-03), 501, 
    datetime(2019-06-04), 500, 
    datetime(2019-06-05), 500, 
    datetime(2019-06-06), 500, 
    datetime(2019-06-07), 500, 
    datetime(2019-06-08), 500, 
    datetime(2019-06-09), 500, 
    datetime(2019-06-10), 500, 
    datetime(2019-06-11), 500, 
    datetime(2019-06-12), 500, 
    datetime(2019-06-13), 502, 
    datetime(2019-06-14), 500, 
]
| evaluate activity_counts_metrics(result_code, day, ago(60d), now(), 'week')
// try using 'month' too, instead of 'week'

This returns:

| day                         | count | dcount | new_dcount | aggregated_dcount |
|-----------------------------|-------|--------|------------|-------------------|
| 2019-04-28 00:00:00.0000000 | 1     | 1      | 1          | 1                 |
| 2019-05-05 00:00:00.0000000 | 1     | 1      | 1          | 1                 |
| 2019-05-19 00:00:00.0000000 | 1     | 1      | 1          | 1                 |
| 2019-05-26 00:00:00.0000000 | 1     | 1      | 1          | 1                 |
| 2019-06-02 00:00:00.0000000 | 7     | 2      | 2          | 2                 |
| 2019-06-09 00:00:00.0000000 | 6     | 2      | 2          | 2                 |

where:

- TimelineColumn (day here): the start time of the time window (week, month, etc.).
- count: the total record count in the time window.
- dcount: the count of distinct ID values in the time window.
- new_dcount: the count of distinct ID values in the time window that were not seen in any previous time window.
- aggregated_dcount: the total aggregated count of distinct ID values from the first time window through the current one (inclusive).
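
Applied to the tables in your question, a minimal sketch could look something like the following. It assumes the requests/exceptions join and the resultCode, type and cloud_RoleName columns from your query, and passes cloud_RoleName as an optional dimension the way the blog query above does; treat it as a starting point rather than a verified solution:

requests
| where resultCode == "500" and timestamp > ago(60d)
| join (exceptions) on operation_Id
| extend Day = startofday(timestamp)
// count distinct exception types per 7-day window and per cloud_RoleName
// (you can also try 'week' instead of 7d, as in the example above)
| evaluate activity_counts_metrics(type, Day, startofday(ago(60d)), startofday(now()), 7d, cloud_RoleName)
// windows with new_dcount > 0 contain at least one type not seen in any earlier window
| where new_dcount > 0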

If you're interested in seeing the actual distinct codes (per week/month), something along the following lines could give you a direction:

datatable(day:datetime, result_code:int)
[
    datetime(2019-05-01), 500, 
    datetime(2019-05-10), 500, 
    datetime(2019-05-20), 500, 
    datetime(2019-06-01), 500, 
    datetime(2019-06-02), 500, 
    datetime(2019-06-03), 501, 
    datetime(2019-06-04), 500, 
    datetime(2019-06-05), 500, 
    datetime(2019-06-06), 500, 
    datetime(2019-06-07), 500, 
    datetime(2019-06-08), 500, 
    datetime(2019-06-09), 500, 
    datetime(2019-06-10), 500, 
    datetime(2019-06-11), 500, 
    datetime(2019-06-12), 500, 
    datetime(2019-06-13), 502, 
    datetime(2019-06-14), 500, 
]
| summarize distinct_codes = make_set(result_code) by start_of_week = startofweek(day)
| extend distinct_codes_count = array_length(distinct_codes)

This returns:

| start_of_week               | distinct_codes | distinct_codes_count |
|-----------------------------|----------------|----------------------|
| 2019-04-28 00:00:00.0000000 | 500            | 1                    |
| 2019-05-05 00:00:00.0000000 | 500            | 1                    |
| 2019-05-19 00:00:00.0000000 | 500            | 1                    |
| 2019-05-26 00:00:00.0000000 | 500            | 1                    |
| 2019-06-02 00:00:00.0000000 | 500, 501       | 2                    |
| 2019-06-09 00:00:00.0000000 | 500, 502       | 2                    |
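
The same idea, applied to your requests/exceptions tables and split per cloud_RoleName, could look roughly like this (again assuming the join and the type column used in your queries):

requests
| where resultCode == "500" and timestamp > ago(60d)
| join (exceptions) on operation_Id
// one row per calendar week and role, with the set of distinct exception types seen that week
| summarize distinct_types = make_set(type) by week = startofweek(timestamp), cloud_RoleName
| extend distinct_types_count = array_length(distinct_types)
| order by cloud_RoleName asc, week asc

Comparing each week's set with the previous week's (for example with set_difference) would then show exactly which types are new for that role.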