
I'm running an API on Cloud Run and I want to know if it's possible to mount a DB file from Cloud Storage into my Cloud Run container so that I can use that DB file for querying.
For example: I have a GeoLite database file in my Cloud Storage and I want to mount it every time a new container is launched by Cloud Run, so I can query the location.
(This is just an example; the storage file could be different, like a larger SQLite database, and it could be getting updated constantly by another service.)

Thanks!

I think you should just use a Cloud Storage SDK to download any files you need at the time you need it. I strongly doubt that you'll be able to mount a Cloud Storage bucket as part of the local filesystem. - Doug Stevenson
I thought about downloading, but during that time it will not serve the APIs, and that could be an issue. It would be like a cold start issue. I thought there might be a feature in Cloud Run like AWS Lambda layers, which can be mounted directly. - Shashank Sachan
I don't understand the issue. What do you mean by "it will not serve the APIs"? You're going to have to pay the time cost of the download regardless. Even if you were able to "mount" the bucket, it would still take time to get the file. - Doug Stevenson
Also, is downloading the file from Storage to Cloud Run counted as egress or ingress? Just thinking about how much it's going to cost if I somehow make it work by downloading the file. I'm new to GCP, so I'm not clear on a few things. As far as I know, ingress is free. - Shashank Sachan
what's the size of the db you want to use? - Pentium10

1 Answer


At the time of posting, there is no way to mount a Storage file in the way you requested. This is a highly voted feature request.

What you can do today is:

Split the database into segments, e.g. a separate DB file for Europe, or split by IP range, and leverage Cloud Run's minimum (idle) instances feature.

Build a container and deploy a service for each piece as a standalone API.
So in Cloud Run you will have a service for:

  • EU geolite service
  • APAC geolite service
  • etc..

The end result is that you don't download from Storage; instead, the data is part of your container. Separating into services also helps you scale and choose appropriate instance sizes, so a service that gets constant traffic is always hot.
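With one service per region, your front-end API just needs to route each request to the right geolite service. A minimal sketch of that routing, where the service URLs and region codes are hypothetical placeholders, not real deployments:

```python
# Map each region to its dedicated Cloud Run geolite service.
# The URLs below are placeholders for whatever Cloud Run assigns on deploy.
REGION_SERVICES = {
    "EU": "https://geolite-eu-example.a.run.app",
    "APAC": "https://geolite-apac-example.a.run.app",
}

DEFAULT_REGION = "EU"

def service_for(region: str) -> str:
    """Pick the per-region geolite service, falling back to a default."""
    return REGION_SERVICES.get(region, REGION_SERVICES[DEFAULT_REGION])
```

Your API would call `service_for("APAC")` (for example) and forward the lookup to that URL.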

Cloud Run also has the concept of idle instances. You could leverage that as well: download from Storage at startup, and the file stays on the instance while it remains warm.
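A sketch of that startup download using the `google-cloud-storage` client library (the bucket name, object name, and local path below are placeholders). Doing it at import time means each container instance pays the cost once, before serving its first request, and warm instances skip it entirely:

```python
"""Download a DB file from Cloud Storage once, at container startup."""
import os


def parse_gs_uri(uri: str):
    """Split a gs://bucket/path/to/object URI into (bucket, object) parts."""
    if not uri.startswith("gs://"):
        raise ValueError("expected a gs:// URI")
    bucket, _, blob = uri[len("gs://"):].partition("/")
    return bucket, blob


def download_db(uri: str, dest: str) -> str:
    """Fetch the DB file unless it is already present on local disk
    (i.e. this instance is warm and has served requests before)."""
    if os.path.exists(dest):
        return dest
    # Imported lazily so the module loads even where the SDK isn't installed.
    from google.cloud import storage
    bucket_name, blob_name = parse_gs_uri(uri)
    client = storage.Client()
    client.bucket(bucket_name).blob(blob_name).download_to_filename(dest)
    return dest


# At import time (container startup), before the first request is served.
# Placeholder names; substitute your own bucket and object:
# DB_PATH = download_db("gs://my-geo-bucket/GeoLite2-City.mmdb",
#                       "/tmp/GeoLite2-City.mmdb")
```

Note that files written under `/tmp` on Cloud Run live in an in-memory filesystem, which ties into the memory-limit point below.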

Please understand that your container and project size (with your DB file as part of it) counts towards the memory limit of your Cloud Run service.