30
votes

I've just noticed that my app's storage usage was bumped almost to its 5GB free-tier limit within the last few weeks. After checking in more detail, it turned out this was caused by the "artifacts" bucket.

I saw this SO question, which says that the "artifacts" bucket is related to the Node 10 environment.

I did indeed move to Node 10 a month ago, but after discovering that the logs were no longer structured in the functions console, I reverted to Node 8 a few days later and have been using only Node 8 since then.

However, I can see that the "artifacts" storage keeps increasing by roughly 800MB every week, which worries me to say the least (please see the screenshots below).

I assume this is related to Firebase functions deploys (or not?), but is this really expected? Can I clean up these artifacts safely?

It seems very strange that it grew so dramatically within just a few weeks, given that I had been using functions for a few years before that and never had any issues like this.

I'd appreciate any suggestions on how to safely manage storage size in this case and keep consumption to a minimum.

I'm also using a pubsub.schedule function, in case that matters here.
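For reference, the scheduled function follows the standard firebase-functions pattern, roughly like this (a minimal sketch; the schedule string and handler body are placeholders, not my actual code):

    // Minimal sketch of a scheduled function (placeholders, not actual code)
    const functions = require('firebase-functions');

    exports.scheduledTask = functions.pubsub
      .schedule('every 24 hours')
      .onRun(async (context) => {
        // ...periodic work goes here...
        return null;
      });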

[screenshots: storage usage graphs]

I've also noticed that the "artifacts" bandwidth spiked quite unexpectedly, which I guess also has cost implications, so I'd appreciate any input on possible ways to minimize such spikes as well (about 22GB out of 22.5GB came from the "artifacts" bucket):

[screenshots: bandwidth usage graphs]


3 Answers

45
votes

Figured out a solution: it turns out there is a way to set up an auto-deletion rule in the Google Cloud console for the images that clutter the storage.

  1. Go to the Google Cloud console, select your project -> Storage -> Browser: https://console.cloud.google.com/storage/browser

  2. Select the "artifacts" bucket

  3. Under the "Lifecycle" tab, add a rule to auto-delete old images (in my case I chose "delete after 1 day since update", which works fine for me); a programmatic equivalent is sketched below.

[screenshot: lifecycle rule configuration]

Storage is safe now!
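If you'd rather set the same rule from code, here's a rough equivalent using the @google-cloud/storage Node client (a sketch only: the bucket name is a placeholder, artifact buckets are typically named like us.artifacts.<project-id>.appspot.com, and the age condition counts days since the object was created):

    // Sketch: add a "delete objects older than 1 day" lifecycle rule
    // to the artifacts bucket. The bucket name below is a placeholder.
    const {Storage} = require('@google-cloud/storage');

    async function addCleanupRule() {
      const storage = new Storage();
      await storage
        .bucket('us.artifacts.my-project.appspot.com')
        .addLifecycleRule({
          action: {type: 'Delete'},
          condition: {age: 1}, // days since object creation
        });
      console.log('Lifecycle rule added');
    }

    addCleanupRule().catch(console.error);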

NOTE: if you face any deployment issues later (e.g. you deploy several days in a row and get an error on deploy), just delete the whole "containers" folder in the artifacts bucket manually, then redeploy; that should solve it. (Make sure not to delete the artifacts bucket itself!)
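If you'd rather script that cleanup than click through the console, here's a rough sketch with the same Node client (the bucket name is a placeholder; check the Storage browser for your project's actual artifacts bucket):

    // Sketch: delete everything under the "containers/" folder of the
    // artifacts bucket, while leaving the bucket itself in place.
    const {Storage} = require('@google-cloud/storage');

    new Storage()
      .bucket('us.artifacts.my-project.appspot.com') // placeholder name
      .deleteFiles({prefix: 'containers/'})
      .then(() => console.log('containers/ cleared'))
      .catch(console.error);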

I hope the Firebase team improves this; the current behavior is confusing, since it easily leads to an unexpected bill unless you take extra steps to prevent it, and you won't know it's coming until it happens.

3
votes

I assume this is related to Firebase functions deploys (or not?), but is this really expected?

Yes, it's expected. Every time you deploy functions, Cloud Build uses a dedicated Cloud Storage space for the built Docker image and retains it until you delete it.
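If you want to check how much those retained images add up to, here's a quick sketch that tallies the bucket with the @google-cloud/storage Node client (the bucket name is a placeholder):

    // Sketch: total up the size of the artifacts bucket's contents.
    const {Storage} = require('@google-cloud/storage');

    async function artifactsUsage() {
      const storage = new Storage();
      const [files] = await storage
        .bucket('us.artifacts.my-project.appspot.com') // placeholder name
        .getFiles();
      const bytes = files.reduce((sum, f) => sum + Number(f.metadata.size), 0);
      console.log(`${files.length} objects, ${(bytes / 1e9).toFixed(2)} GB`);
    }

    artifactsUsage().catch(console.error);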

Can I clean up these artifacts safely?

Yes, but then you won't be able to easily revert to a prior image. You would have to deploy again from your own source code.

1
vote

On top of GCP's lifecycle settings for artifact images, you can also consider the following for further optimization and cost reduction of your Firebase Functions deployments:

  1. Clean up your functions folder and don't put unnecessary files in it, since it's unclear whether Google uploads only the files pulled in by dependencies or the whole functions folder. Feel free to refine this item if anyone can confirm the behavior.
  2. Remove unnecessary dependencies from functions/package.json and functions/node_modules, and remove unused require statements from your JS files, e.g. functions/index.js.
  3. Compact and compress your functions' JS files by removing unnecessary comments, console logging, etc. You can achieve this with the help of the grunt and uglify NPM packages. Again, it's unclear whether Cloud Build (or any other part of Google's functions deployment pipeline) auto-compresses the function images before storing them in the Container Registry or Cloud Storage (please refine this item if you have a better answer).
  4. Organize your functions into relevant function groups so that you can deploy only certain groups of functions rather than everything at once with firebase deploy --only functions (see the sketch after this list).
  5. If necessary, write code that automatically detects and resolves environmental differences, e.g. environment variables differing between the local emulators and production/staging, because the Firebase emulators and the production environment may not be 100% consistent. Otherwise you may end up deploying several times a day to chase down such issues, which will spike your deployment costs.
  6. If necessary, change your deployment cadence from daily to weekly, or even from weekly to monthly, depending on your monthly budget, criticality, and urgency.
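Here is a minimal sketch of point 4, function groups (the file layout and names are hypothetical): exporting a module under a key turns its exports into a group that can be deployed on its own.

    // index.js -- grouping functions so a subset can be deployed alone.
    // Deploy just the group with:
    //   firebase deploy --only functions:metrics
    const functions = require('firebase-functions');

    // Every export of ./metrics.js is deployed as metrics-<name>.
    exports.metrics = require('./metrics');

    // An ungrouped function, deployable with --only functions:ping
    exports.ping = functions.https.onRequest((req, res) => res.send('pong'));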

Lastly, I hope the community can also add more recommended cost-reduction plans and strategies to this post, to help small businesses and individuals survive better on Firebase and Google Cloud Platform as a whole. Even links to good articles would help. Thanks!