
We have a strange issue with Stackdriver alerts in our project. We have set up an alert that triggers if a log metric falls below a threshold, and just recently it has triggered an alert despite the logs being normal. The graph in the alert page shows that there are 2 metrics being measured (there should only be 1), and that one of them falls down to 0, and then a new one with the same name 'takes over'. It seems that the alert is triggered for the 1st one, but since the 2nd one proceeds as normal, the alert for the 1st one never resolves.
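For reference, the alert policy was set up roughly like this (a minimal sketch using the google-cloud-monitoring Python client; the project ID, metric name, threshold, and durations below are placeholders, not our exact values):

```python
from google.cloud import monitoring_v3
from google.protobuf import duration_pb2

# Placeholder project ID and log-based metric name.
project_id = "my-project"
client = monitoring_v3.AlertPolicyServiceClient()

policy = monitoring_v3.AlertPolicy(
    display_name="Log metric below threshold",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[
        monitoring_v3.AlertPolicy.Condition(
            display_name="rate of my_log_metric drops below threshold",
            condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
                # Log-based metrics appear under logging.googleapis.com/user/...
                filter='metric.type = "logging.googleapis.com/user/my_log_metric"',
                comparison=monitoring_v3.ComparisonType.COMPARISON_LT,
                threshold_value=0.1,
                duration=duration_pb2.Duration(seconds=300),
                aggregations=[
                    monitoring_v3.Aggregation(
                        alignment_period=duration_pb2.Duration(seconds=60),
                        per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_RATE,
                    )
                ],
            ),
        )
    ],
)

created = client.create_alert_policy(
    name=f"projects/{project_id}", alert_policy=policy
)
print("Created policy:", created.name)
```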

[Screenshots: alert graph showing two metric lines with the same name]

The screenshots show this 'transition' in the graph. At 8:57pm, the 1st graph is at 0 and the 2nd graph at 0.53. Then at 9:03pm the 1st graph has risen to 0.42 and the 2nd graph has fallen to 0 and has remained at 0 for the past few hours, which triggered the alert. How do I resolve this alert?
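In case it helps with diagnosing, I can list the distinct time series behind the metric type with something like the sketch below (again with placeholder project and metric names), which is how I'd expect to see where the second line in the graph is coming from:

```python
import time
from google.cloud import monitoring_v3

# Placeholder project ID and log-based metric name.
project_id = "my-project"
client = monitoring_v3.MetricServiceClient()

now = time.time()
interval = monitoring_v3.TimeInterval(
    {
        "end_time": {"seconds": int(now)},
        "start_time": {"seconds": int(now) - 6 * 3600},  # last 6 hours
    }
)

results = client.list_time_series(
    request={
        "name": f"projects/{project_id}",
        "filter": 'metric.type = "logging.googleapis.com/user/my_log_metric"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

# Each distinct label combination is a separate time series; two entries
# here would correspond to the two lines with the same name in the graph.
for ts in results:
    print(ts.resource.type, dict(ts.resource.labels), dict(ts.metric.labels))
```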

I would like to take a further look at this issue. I work for Google Cloud Support. Could you please open a private issue and add your project number: issuetracker.google.com/issues/new?component=187164 Once we find the solution I will post it here so anyone can have the answer. – Pol Arroyo
@PolArroyo I have submitted an issue via your link. Thanks! – john2x

1 Answer


There was an issue on our side that created two metrics, and one of them was triggering the alert.

This issue could have affected other customers as well, but no specific action is needed on this post, as it was a transient issue and it has been fixed for all customers.