9
votes

I've been doing some Amazon AWS tinkering for a project that pulls in a decent amount of data. The majority of the services have been super cheap; however, CloudWatch log storage is dominating the bill, at $13 of the total $18. I'm already deleting logs as I go.

[Screenshot: CloudWatch usage]

[Screenshot: CloudWatch bill]

How do I get rid of the logs from storage (removing the groups from the console doesn't seem to be doing it), or lower the cost of the logs (this post indicated storage should be $0.03/GB, and mine is costing more than that), or something else?

What strategies are people using?

3
I don't know if it helps, but I put some tips for working out which log streams are generating the data as an answer at stackoverflow.com/questions/43327714/… – Sam Critchley

3 Answers

15
votes

Don't Log Everything

Can you tell us how many logs/hour you are pushing?

One thing I've learned over the years is that while multi-level logging is nice (Debug, Info, Warn, Error, Fatal), it has two serious drawbacks:

  • it slows down the application, which has to evaluate all of those levels at runtime: even if you say "only log Warn, Error and Fatal", the arguments to every Debug and Info call are still evaluated (see the sketch after this list);
  • it increases logging costs (I was using LogEntries, and the move to a self-hosted LogStash + ElasticSearch cluster only added DevOps labor and hosting costs on top).
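
To make the first point concrete, here is a minimal sketch (the Logger type, its levels, and expensiveDump are all hypothetical) of why a naive leveled logger still pays for disabled levels: Go builds a call's arguments before the level check inside the call ever runs.

    package main

    import (
    	"fmt"
    	"log"
    )

    // A naive leveled logger: the level check happens inside the
    // call, so the caller's arguments are already built by then.
    type Logger struct{ level int }

    const (
    	LevelDebug = iota
    	LevelInfo
    	LevelWarn
    )

    func (l *Logger) Debug(msg string) {
    	if l.level <= LevelDebug {
    		log.Println("DEBUG:", msg)
    	}
    }

    // expensiveDump stands in for serializing a large struct
    // just to build a log line.
    func expensiveDump() string {
    	return fmt.Sprintf("%+v", make([]int, 1000))
    }

    func main() {
    	l := &Logger{level: LevelWarn}
    	// expensiveDump() runs even though nothing is written: Go
    	// evaluates the argument before Debug ever sees the level.
    	l.Debug(expensiveDump())
    }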

For the record, I've paid over $1000/mo for logging on previous projects. PCI compliance for security audits requires two years of logs, and we were sending thousands of log lines per second.

I even gave talks about how you should be logging everything in context:

http://go-talks.appspot.com/github.com/eduncan911/go-slides/gologit.slide#1

I have since retreated from that stance after benchmarking my applications and funcs and weighing the overall costs of labor and log storage in production.

I now log only the minimum (errors), and use packages that skip the runtime evaluation when the log level is not set, such as Google's glog.
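
For example, glog's V() guard reduces a disabled verbose log to a cheap boolean test (a sketch; the -v=2 threshold and expensiveDump are illustrative):

    package main

    import (
    	"flag"
    	"fmt"

    	"github.com/golang/glog"
    )

    // expensiveDump stands in for costly work done only to log.
    func expensiveDump() string {
    	return fmt.Sprintf("%+v", make([]int, 1000))
    }

    func main() {
    	flag.Parse() // glog reads its -v level from the command line
    	defer glog.Flush()

    	// V(2) is a cheap boolean check; expensiveDump only runs if
    	// the process was started with -v=2 or higher.
    	if glog.V(2) {
    		glog.Info("verbose detail: ", expensiveDump())
    	}

    	glog.Error("errors are always logged")
    }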

Also, since moving to Go development, I have adopted a strategy of very small units of code (e.g. microservices, packages, and dedicated CLI utils) that negates the need for lots of Debug and Info statements in monolithic stacks: I can just log the RPC to/from each service instead. Better yet, just monitor the event bus.
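
As a sketch of what boundary-only logging can look like (the middleware name, route and handler are hypothetical), one log line per request at the service edge replaces Debug/Info noise inside the handlers:

    package main

    import (
    	"log"
    	"net/http"
    	"time"
    )

    // logBoundary logs one line per request at the service edge,
    // instead of scattered Debug/Info statements inside handlers.
    func logBoundary(next http.Handler) http.Handler {
    	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    		start := time.Now()
    		next.ServeHTTP(w, r)
    		log.Printf("%s %s took %s", r.Method, r.URL.Path, time.Since(start))
    	})
    }

    func main() {
    	mux := http.NewServeMux()
    	mux.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) {
    		w.Write([]byte("pong"))
    	})
    	log.Fatal(http.ListenAndServe(":8080", logBoundary(mux)))
    }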

Finally, with unit tests of these small services, you can be assured of how your code is acting, and you don't need those Info and Debug statements because your tests exercise the good and bad input conditions. Those statements can live inside your unit tests instead, leaving your code free of cross-cutting concerns.
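
A minimal sketch of that idea (Parse and its rules are invented for illustration): the good and bad input conditions become test assertions rather than Debug lines in the code.

    package parse

    import (
    	"errors"
    	"testing"
    )

    // Parse is a stand-in for the small unit under test.
    func Parse(s string) (string, error) {
    	if s == "" {
    		return "", errors.New("empty input")
    	}
    	return s, nil
    }

    func TestParse(t *testing.T) {
    	// The conditions you would otherwise trace with Debug/Info
    	// statements live here as assertions instead.
    	if _, err := Parse("good-input"); err != nil {
    		t.Fatalf("expected good input to parse: %v", err)
    	}
    	if _, err := Parse(""); err == nil {
    		t.Fatal("expected empty input to fail")
    	}
    }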

All of this basically reduces your logging needs in the end.

Alternative: Filter your Logs

How are you shipping your logs?

If you are not able to exclude all of the Debug, Info and other lines at the source, another idea is to filter your logs before you ship them, using sed, awk or the like to pipe the kept lines to another file.

When you need to debug something, change the sed/awk filter to ship the extra log detail. When you're done debugging, go back to filtering and ship only the essentials, such as exceptions and errors.
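
If you'd rather keep the filter in Go than in sed/awk, a minimal stand-in might look like this (the DEBUG/INFO line prefixes are an assumption about your log format). It reads stdin and writes the kept lines to stdout, so it slots into whatever pipe currently feeds your shipper:

    package main

    import (
    	"bufio"
    	"os"
    	"strings"
    )

    // Reads log lines on stdin, drops Debug/Info noise, and writes
    // the rest to stdout for whatever ships the logs.
    func main() {
    	in := bufio.NewScanner(os.Stdin)
    	out := bufio.NewWriter(os.Stdout)
    	defer out.Flush()
    	for in.Scan() {
    		line := in.Text()
    		if strings.HasPrefix(line, "DEBUG") || strings.HasPrefix(line, "INFO") {
    			continue // filtered out before shipping
    		}
    		out.WriteString(line + "\n")
    	}
    }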

9
votes

There are two components to the price you pay:
1) ingestion costs: you pay when you send/upload the logs;
2) storage costs: you pay to keep the logs around.

The storage costs are very low ($0.03/GB), so I'm guessing that's not the issue; i.e. the increased storage usage is a red herring that accounts for only a few cents of the total CloudWatch bill. You are paying for ingestion as it happens. The only real way to reduce that is to reduce the amount of logging you are doing and/or stop using CloudWatch.

https://aws.amazon.com/cloudwatch/pricing/
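
To put rough numbers on it (assuming the commonly quoted us-east-1 rates of $0.50 per GB ingested and $0.03 per GB-month stored): a $13 charge at $0.50/GB implies roughly 26 GB of logs ingested, while keeping those 26 GB around would cost only about 26 × $0.03 ≈ $0.78/month. In other words, ingestion, not storage, is where the money goes.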

1
vote

It sounds like you need to modify the Log Retention Settings so that you aren't retaining as much log data.
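
For instance, with the AWS SDK for Go this is a one-call change (a sketch; the log group name is hypothetical and seven days is an arbitrary choice):

    package main

    import (
    	"log"

    	"github.com/aws/aws-sdk-go/aws"
    	"github.com/aws/aws-sdk-go/aws/session"
    	"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
    )

    func main() {
    	sess := session.Must(session.NewSession())
    	svc := cloudwatchlogs.New(sess)

    	// Expire events after 7 days instead of keeping them forever.
    	_, err := svc.PutRetentionPolicy(&cloudwatchlogs.PutRetentionPolicyInput{
    		LogGroupName:    aws.String("/my/app/logs"), // hypothetical group name
    		RetentionInDays: aws.Int64(7),
    	})
    	if err != nil {
    		log.Fatalf("failed to set retention: %v", err)
    	}
    	log.Println("retention set to 7 days")
    }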

This page lists the current pricing for CloudWatch and CloudWatch Logs. If you think you are being overcharged, you need to contact AWS Support.