
I wanted to see if anyone has advice or further reading for diagnosing the Heroku Error R14 (Memory quota exceeded) errors I'm getting from my web dynos in my Django app's Heroku logs.

An example log is:

heroku[web.1]: source=web.1 dyno=heroku.16810889.deec8406-c082-445d-a047-d0026849fd5e sample#load_avg_1m=0.01 sample#load_avg_5m=0.03 sample#load_avg_15m=0.04
heroku[web.1]: source=web.1 dyno=heroku.16810889.deec8406-c082-445d-a047-d0026849fd5e sample#memory_total=512.06MB sample#memory_rss=511.84MB sample#memory_cache=0.00MB sample#memory_swap=0.22MB sample#memory_pgpgin=380186624pages sample#memory_pgpgout=364599pages
heroku[web.1]: Process running mem=512M(100.0%)
heroku[web.1]: Error R14 (Memory quota exceeded)

Some background information, observations and things I've tried:

  1. Most of the memory is being consumed by memory_rss (a Google search for heroku "memory_rss" doesn't turn up much; see the logging sketch below this list for how I'm thinking of tracking this per request)
  2. Scaling the number of web dynos up or down has no effect; each new web dyno soon hits 512MB (100%). It always plateaus at 100% and never goes higher, and restarting dynos only alleviates the issue for 10-15 minutes.
  3. This issue is only affecting web dynos. I have one Celery scheduler and one Celery worker dyno running fine; celery.1's memory total is hovering right around 100MB.
  4. Here's my instance dash from New Relic:

[screenshot: New Relic instance dashboard]

We also ran the exact same code on a different Heroku app (our staging server), and memory there never went above 160MB, so the problem seems to be server-specific (at least to some extent).
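For what it's worth, here's the kind of per-request memory logging I'm considering adding to narrow down which views grow the RSS. This is only a sketch, not something I've deployed yet: it assumes a Django version with new-style middleware (older versions would use process_response instead), and the logger name is arbitrary.

    import logging
    import resource

    logger = logging.getLogger("memlog")  # arbitrary name; route it to stdout for Heroku

    class MemoryLoggingMiddleware:
        """Log the process's peak RSS after each request so leaky views show up in the logs."""

        def __init__(self, get_response):
            self.get_response = get_response

        def __call__(self, request):
            response = self.get_response(request)
            # On Linux (which Heroku dynos run), ru_maxrss is the peak resident
            # set size in kilobytes, and it only ever grows for the process.
            peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
            logger.info("method=%s path=%s peak_rss=%.1fMB",
                        request.method, request.path, peak_kb / 1024.0)
            return response

The idea is that whichever paths show a jump in peak_rss between consecutive requests are the ones allocating memory that never gets released.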

Any advice on where I should look next? What other information can I provide that would be helpful? Thanks!


1 Answer


Slightly ridiculous, but I traced the issue to django-avatar, which the app uses for user profile avatars. Almost 50% of the response time for any page in the app was being spent in the {% block header %} of the template, which didn't make sense, and it turned out to be the {% avatar %} tag.
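The header block looked roughly like this (paraphrased, and the size argument is illustrative):

    {% load avatar_tags %}

    {% block header %}
        {# this innocuous-looking tag turned out to be the hotspot #}
        {% avatar request.user 80 %}
    {% endblock %}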

The root cause: AVATAR_STORAGE_DIR wasn't properly configured for S3 in settings.py.
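For anyone hitting the same thing, the fix was along these lines in settings.py. This is a sketch rather than our exact config: it assumes django-storages with the S3 backend, and the bucket name and credential handling are placeholders.

    import os

    # Serve uploaded media (including avatars) from S3 instead of the
    # dyno's ephemeral filesystem. Assumes django-storages is installed.
    DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
    AWS_ACCESS_KEY_ID = os.environ["AWS_ACCESS_KEY_ID"]
    AWS_SECRET_ACCESS_KEY = os.environ["AWS_SECRET_ACCESS_KEY"]
    AWS_STORAGE_BUCKET_NAME = "my-app-media"  # placeholder

    # django-avatar stores avatars under this prefix within the default
    # storage backend; this is the setting that was misconfigured for us.
    AVATAR_STORAGE_DIR = "avatars"

Once avatar storage actually pointed at S3, both the response-time hotspot and the runaway dyno memory went away.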