13 votes

When deploying a docker container image to Cloud Run, I can choose a region, which is fine. Cloud Run delegates the build to Cloud Build, which apparently creates two buckets to make this happen. The unexpected behavior is that the buckets aren't created in the region of the Cloud Run deployment; instead they default to multi-region US.

How do I specify the region as "us-east1" so the cost of storage is absorbed by the "always free" tier? (Apparently US multi-region storage buckets store data in regions outside of the free tier limits, which resulted in a surprise bill - I am trying to avoid that bill.)

If it matters, I am also using Firebase in this project. I created the Firebase default storage bucket in the us-east1 region with the hopes that it might also become the default for other buckets, but this is not so. The final bucket list looks like this, where you can see the two buckets created automatically with the undesirable multi-region setting.

[Screenshot: bucket list showing the two automatically created multi-region US buckets alongside the us-east1 Firebase bucket]

This is the shell script I'm using to build and deploy:

#!/bin/sh

project_id=$1
service_id=$2

if [ -z "$project_id" ]; then
    echo "First argument must be the Google Cloud project ID" >&2
    exit 1
fi

if [ -z "$service_id" ]; then
    echo "Second argument must be the Cloud Run app name" >&2
    exit 1
fi

echo "Deploying $service_id to $project_id"

tag="gcr.io/$project_id/$service_id"

gcloud builds submit \
    --project "$project_id" \
    --tag "$tag" \
&& \
gcloud run deploy "$service_id" \
    --project "$project_id" \
    --image "$tag" \
    --platform managed \
    --update-env-vars "GOOGLE_CLOUD_PROJECT=$project_id" \
    --region us-central1 \
    --allow-unauthenticated
I think this is a duplicate of stackoverflow.com/questions/51595900/…. You should still be able to email cloud-build-contact@google.com to get access to the early-access program. – Dustin Ingram

It's not a duplicate at all; this question is about which region or zone the artifacts are stored in. – Ferregina Pelona

@DustinIngram This is just about the region of the stored artifacts. I don't care where the computing resources that handle the build are, or even how they work. I'm just running gcloud commands to build and deploy. I've edited the question to be specific about that. – Doug Stevenson

@FernandoRV Yes, this is just about the artifacts. I've seen some instructions out there about using YAML files that let you specify a container registry, but that seems like overkill, and there don't seem to be any simple gcloud CLI options for managing these buckets. – Doug Stevenson

Gotcha, sorry I misread! – Dustin Ingram

2 Answers

7 votes

As you mention, Cloud Build creates the bucket(s) as multi-region because deploying a Cloud Run service only passes the flags and arguments needed for the deployment itself; nothing specifies a bucket location.

The documentation for the command gcloud builds submit mentions the following for the flag --gcs-source-staging-dir:

--gcs-source-staging-dir=GCS_SOURCE_STAGING_DIR

A directory in Google Cloud Storage to copy the source used for staging the build. If the specified bucket does not exist, Cloud Build will create one. If you don't set this field, gs://[PROJECT_ID]_cloudbuild/source is used.

Since this flag is not set, the bucket is created as multi-region in the US. The same behavior applies to the --gcs-log-dir flag.
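A minimal sketch of avoiding the default, assuming you pre-create a regional staging bucket (the bucket name here is a placeholder):

```shell
# Create a staging bucket in the desired region (name is hypothetical)
gsutil mb -p "$project_id" -l us-east1 "gs://${project_id}-build-staging"

# Point both source staging and logs at it explicitly
gcloud builds submit \
    --project "$project_id" \
    --tag "$tag" \
    --gcs-source-staging-dir "gs://${project_id}-build-staging/source" \
    --gcs-log-dir "gs://${project_id}-build-staging/logs"
```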

To place the bucket in whatever region, dual-region, or multi-region you want, use a cloudbuild.yaml together with the --gcs-source-staging-dir flag. You can do the following:

  1. Create a bucket in the region, dual-region, or multi-region you want. For example, I created a bucket called "example-bucket" in australia-southeast1.
  2. Create a cloudbuild.yaml file. This is necessary to store the build artifacts in the bucket you want, as mentioned here. An example is as follows:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args:
  - 'run'
  - 'deploy'
  - 'cloudrunservice'
  - '--image'
  - 'gcr.io/PROJECT_ID/IMAGE'
  - '--region'
  - 'REGION_TO_DEPLOY'
  - '--platform'
  - 'managed'
  - '--allow-unauthenticated'
artifacts:
  objects:
    location: 'gs://example-bucket'
    paths: ['*']
  3. Finally, run the following command:
gcloud builds submit --gcs-source-staging-dir="gs://example-bucket/cloudbuild-custom" --config cloudbuild.yaml

The steps above can be adapted to your script. Give it a try :) and you will see that even if the Cloud Run service is deployed in Asia, Europe, or the US, the bucket specified above can be in another location.
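Adapted to the deploy script in the question, the build step could pass the same flags (the staging bucket name is an assumption; create it in us-east1 beforehand). Note that the image pushed with --tag still lands in gcr.io, whose backing artifacts bucket remains multi-region; only the source and log buckets become regional here:

```shell
gcloud builds submit \
    --project "$project_id" \
    --tag "$tag" \
    --gcs-source-staging-dir "gs://${project_id}-build-staging/source" \
    --gcs-log-dir "gs://${project_id}-build-staging/logs" \
&& \
gcloud run deploy "$service_id" \
    --project "$project_id" \
    --image "$tag" \
    --platform managed \
    --update-env-vars "GOOGLE_CLOUD_PROJECT=$project_id" \
    --region us-central1 \
    --allow-unauthenticated
```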

3 votes

Looks like this is only possible by doing what you mentioned in the comments:

  1. Create a storage bucket in us-east1 as the source bucket ($SOURCE_BUCKET);
  2. Create an Artifact Registry repo in us-east1;
  3. Create the following cloudbuild.yaml:
    steps:
    - name: 'gcr.io/cloud-builders/docker'
      args: ['build', '-t', 'us-east1-docker.pkg.dev/$PROJECT_ID/my-repo/my-image', '.']
    images:
    - 'us-east1-docker.pkg.dev/$PROJECT_ID/my-repo/my-image'
    
  4. Deploy with:
    $ gcloud builds submit --config cloudbuild.yaml --gcs-source-staging-dir=gs://$SOURCE_BUCKET/source
    

More details here: https://cloud.google.com/artifact-registry/docs/configure-cloud-build

I think it should at least be possible to specify the Artifact Registry repo with the --tag option and have it be automatically created, but it currently rejects any domain that isn't gcr.io outright.