
Don't get me wrong. I think Application Insights (AI) is amazing - especially when it comes to seeing statistically how much a service has been used, how often calls have succeeded or failed, and so on.

But there are other types of logging, for example when one needs to log messages, or start and stop times for specific events. My feeling is that this type of logging does not really fit into AI.

AI, for example, uses sampling to trim down the amount of log data, and even though I could use customEvents and avoid sampling for that type of data via the SDK, it would be a lot of work and end up feeling like working against the framework. I'm also not talking about a single service here, but hundreds of different services where end-to-end logging and visibility are needed.

So the question is simple: is my feeling wrong and should I try to use AI for all types of logging? Or should I add an additional kind of logging, for example in Azure Table Storage, for the types of logging that don't come naturally to AI? (If so, ideas on best practices here would be appreciated.)


1 Answer


As always, it depends. Of course you can use AI for all your logging, but it comes at a cost: AI is not cheap when you log massive amounts of data. Also, depending on your use case, the maximum retention period of 90 days may be too short.

Some things to consider:

  • You can avoid sampling by using a self-defined TelemetryClient (see the sketch after this list). From the FAQ:

There are certain rare events I always want to see. How can I get them past the sampling module?

Initialize a separate instance of TelemetryClient with a new TelemetryConfiguration (not the default Active one). Use that to send your rare events.

  • Metrics by default are not sampled (source):

Application Insights does not sample metrics and sessions telemetry types. Reduction in the precision can be highly undesirable for these telemetry types.
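The FAQ quote refers to the .NET SDK, but the same idea carries over to the other SDKs. As a rough illustration, here is what it could look like with the Node.js SDK (the applicationinsights npm package, 1.x API): a second TelemetryClient with its own configuration, so the sampling set on the default client does not apply to it. The instrumentation key, event name and metric name are placeholders:

```typescript
import * as appInsights from "applicationinsights";

// Shared pipeline for bulk telemetry, sampled to keep volume (and cost) down.
appInsights.setup("<your-instrumentation-key>").start();
appInsights.defaultClient.config.samplingPercentage = 20;

// A separate, independently configured client for must-keep telemetry:
// it does not share the default client's configuration, so its events
// bypass the sampling configured above.
const rareEvents = new appInsights.TelemetryClient("<your-instrumentation-key>");
rareEvents.config.samplingPercentage = 100; // send everything

rareEvents.trackEvent({ name: "JobStarted" });

// Metrics are not sampled by default anyway (see the second bullet),
// so plain metrics are safe to send through either client.
rareEvents.trackMetric({ name: "JobDurationMs", value: 1234 });
```

Create one such client per process and reuse it; each client carries its own configuration and channel, so constructing one per call would be wasteful.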

We ended up using a hybrid scenario: we log all metrics and events to AI, because for fast analysis AI (and especially Application Insights Query Analytics) is king.

But we also store aggregated statistics in Azure Table Storage and all events in Azure Blob Storage (you could use Continuous Export for this; we did not). This allows us to gather long-term statistics, and by having all events in blobs we can use Power BI, Azure Data Lake Analytics and other tools for analysis and visualization.
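If you roll your own writes instead of using Continuous Export, the Table Storage side can stay small. A sketch using the legacy azure-storage npm package; the table name, service name and message are made up for illustration. Using the service name as PartitionKey and a reversed timestamp as RowKey keeps each service's newest entries at the front of a range scan, which is a common Table Storage logging pattern:

```typescript
import * as azure from "azure-storage";

const tableService = azure.createTableService(process.env.STORAGE_CONNECTION_STRING!);
const entGen = azure.TableUtilities.entityGenerator;

// RowKey = reversed timestamp, zero-padded so lexicographic order
// matches numeric order: newest entries sort first within a partition.
const rowKey = (Number.MAX_SAFE_INTEGER - Date.now()).toString().padStart(16, "0");

tableService.createTableIfNotExists("servicelogs", (err) => {
  if (err) throw err;
  tableService.insertEntity(
    "servicelogs",
    {
      PartitionKey: entGen.String("OrderService"), // hypothetical service name
      RowKey: entGen.String(rowKey),
      message: entGen.String("Job started"),
    },
    (insertErr) => {
      if (insertErr) throw insertErr;
    }
  );
});
```

Partitioning by service name keeps each service's log scans cheap, and the reversed-timestamp RowKey means "latest N entries for service X" is a single fast range query.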

Take a look at the two pricing plans as well. We have 5 nodes, so we went for the Enterprise pricing plan, since the free data allowance per node works in our favor.