1 vote

I am trying to build a new Docker image dynamically using a Cloud Build trigger job; however, I fail to see how to safely retrieve my credentials so the build can authenticate against GCP with a service account.

Here are the steps:

  1. Dockerfile created with steps to build a Docker image. One of the steps includes downloading a file from Google Storage (bucket) that I need to access as a GCP service account.

  2. The Docker image is built by a Cloud Build trigger that fires after each change in the linked repository, and the resulting image is stored in GCR.

Step one fails because:

1.) By default, for some reason, the user running the Dockerfile build in GCP is not authenticated against GCP. It is not the default Google Cloud Build account; it is an anonymous user.

2.) I can authenticate as a service account, BUT:

a.) I don't want to store the JSON private key unencrypted, locally or in the repository.

b.) If I stored it encrypted in the GCP repository, then I need to authenticate before decrypting it with KMS. But I don't have the key because it's still encrypted. So I am back to my problem.

c.) If I stored it in a GCP Storage bucket, I need to authenticate, too. So I am back to my problem.

Is there any other approach that would let the Cloud Build trigger job run in (or obtain) a GCP service account context?

What does the Cloud Build step look like that needs credentials? I am assuming it is a gsutil command? – Kolban

Yes, you are correct. – Stan

3 Answers

3 votes

Solution #1 from @ParthMehta is the right one.

Before calling the Docker build, add this step to your Cloud Build config to download the file from Cloud Storage using the permissions of the Cloud Build environment (its service account is <PROJECT_NUMBER>@cloudbuild.gserviceaccount.com):

- name: gcr.io/cloud-builders/gsutil
  args: ['cp', 'gs://mybucket/my_file', 'my_file']

The file is copied into the current directory of the Cloud Build execution, /workspace. Then add the file to your container with a simple COPY in your Dockerfile:

....
COPY ./my_file ./my_file
....
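
Putting the two pieces together, a minimal cloudbuild.yaml might look like the sketch below (the bucket, file, and image names are placeholders, not values from the question):

steps:
  # 1. Download the file using the Cloud Build service account's permissions;
  #    it lands in /workspace, the working directory shared by all steps.
  - name: gcr.io/cloud-builders/gsutil
    args: ['cp', 'gs://mybucket/my_file', 'my_file']

  # 2. Build the image; the Dockerfile can now simply COPY ./my_file.
  - name: gcr.io/cloud-builders/docker
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-image', '.']

# Push the built image to Container Registry.
images:
  - 'gcr.io/$PROJECT_ID/my-image'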

Generally speaking, when you are working in a GCP environment, you should never have to use a JSON key file.

3 votes
  1. You can let Cloud Build download the file from Cloud Storage for you and let Docker access the directory so it can use the file. You'll need to allow the Cloud Build service account to access your bucket (see the sketch after these two options).

    see: https://cloud.google.com/cloud-build/docs/securing-builds/set-service-account-permissions

OR

  2. Use gcloud auth configure-docker, and then impersonate a service account that has access to the bucket using --impersonate-service-account, so the Docker user has sufficient access to download the file.

    see: https://cloud.google.com/sdk/gcloud/reference/auth/configure-docker
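
As a rough sketch of the two options (the bucket name and impersonated service account e-mail are placeholders I'm assuming, not values from the question):

# Option 1: grant the Cloud Build service account read access to the bucket,
# so a gsutil build step can download the file into /workspace.
gsutil iam ch \
  serviceAccount:<PROJECT_NUMBER>@cloudbuild.gserviceaccount.com:objectViewer \
  gs://mybucket

# Option 2: register gcloud as a Docker credential helper, then run the copy
# while impersonating a service account that can read the bucket.
gcloud auth configure-docker
gcloud storage cp gs://mybucket/my_file my_file \
  --impersonate-service-account=my-sa@<PROJECT_ID>.iam.gserviceaccount.com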

0 votes

This is an old question, but neither answer above was satisfactory for me because I needed to pull private packages from Artifact Registry. After a lot of trial and error I found a solution using short-lived access tokens and service account impersonation, and I'm sharing it in case anyone else has the same issue.

Specifically, I'm using Cloud Build and a Docker container to transpile my Node app before deploying it. The build process needs to pull private NPM packages from Artifact Registry, but it didn't work because it wasn't authorized.

Working Solution

  1. First create a Service Account that has access to whatever GCP service you need. In my case I created artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com and gave it access to the Artifact Registry repository as an "Artifact Registry Reader." In your case you'd give it access to that bucket.

  2. Edit the newly created Service Account and, under its permissions, add your Cloud Build Service Account (<PROJECT_NUMBER>@cloudbuild.gserviceaccount.com) as a Principal and grant it the "Service Account Token Creator" role. (A gcloud sketch of steps 1 and 2 follows below.)
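
For reference, steps 1 and 2 could roughly be done from the command line like this (a sketch only; the service account name matches the one above, and the project placeholders are assumptions):

# Step 1: create the service account and give it read access to Artifact Registry.
gcloud iam service-accounts create artifact-registry-reader --project=<PROJECT>

gcloud projects add-iam-policy-binding <PROJECT> \
  --member="serviceAccount:artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.reader"

# Step 2: let the Cloud Build service account mint tokens for (impersonate) it.
gcloud iam service-accounts add-iam-policy-binding \
  artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com \
  --member="serviceAccount:<PROJECT_NUMBER>@cloudbuild.gserviceaccount.com" \
  --role="roles/iam.serviceAccountTokenCreator"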

  3. Next, your cloudbuild.yaml file should look something like this:

steps:
  # Step 1: Generate an Access Token and save it
  #
  # Here we call `gcloud auth print-access-token` to impersonate the service account 
  # we created above and to output a short-lived access token to the default volume 
  # `/workspace/access_token`.  This is accessible in subsequent steps.
  #
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
    args:
      - '-c'
      - >
        gcloud auth print-access-token --impersonate-service-account
        artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com >
        /workspace/access_token
    entrypoint: sh
  # Step 2: Build our Docker container
  #
  # We build the Docker container passing the access token we generated in Step 1 as 
  # the `--build-arg` `TOKEN`.  It's then accessible within the Dockerfile using
  # `ARG TOKEN`
  #
  - name: gcr.io/cloud-builders/docker
    args:
      - '-c'
      - >
        docker build -t us-docker.pkg.dev/<PROJECT>/services/frontend:latest
        --build-arg TOKEN=$(cat /workspace/access_token) -f
        ./docker/prod/Dockerfile . &&

        docker push us-docker.pkg.dev/<PROJECT>/services/frontend
    entrypoint: sh
  4. This next step is specific to private npm packages in Artifact Registry: I created a partial .npmrc file (missing the :_authToken line) with the following content:
@<NAMESPACE>:registry=https://us-npm.pkg.dev/<PROJECT>/npm/
//us-npm.pkg.dev/<PROJECT>/npm/:username=oauth2accesstoken
//us-npm.pkg.dev/<PROJECT>/npm/:email=artifact-registry-reader@<PROJECT>.iam.gserviceaccount.com
//us-npm.pkg.dev/<PROJECT>/npm/:always-auth=true
  5. Finally, my Dockerfile uses the minted token to update the .npmrc file, giving the build access to pull private npm packages from Artifact Registry:
ARG NODE_IMAGE=node:17.2-alpine

FROM ${NODE_IMAGE} as base

ENV APP_PORT=8080

ENV WORKDIR=/usr/src/app
ENV NODE_ENV=production

FROM base AS builder

# Create our WORKDIR
RUN mkdir -p ${WORKDIR}

# Set the current working directory
WORKDIR ${WORKDIR}

# Copy the files we need
COPY --chown=node:node package.json ./
COPY --chown=node:node ts*.json ./
COPY --chown=node:node .npmrc ./
COPY --chown=node:node src ./src

#######################
# MAGIC HAPPENS HERE
# Append our access token to the .npmrc file and the container will now be 
# authorized to download packages from the Artifact Registry
# 
# IMPORTANT! Declare the TOKEN build arg so that it's accessible
#######################

ARG TOKEN
RUN echo "//us-npm.pkg.dev/<PROJECT>/npm/:_authToken=\"$TOKEN\"" >> .npmrc

RUN npm install

RUN npm run build

EXPOSE ${APP_PORT}/tcp

CMD ["cd", "${WORKDIR}"]
ENTRYPOINT ["npm", "run", "start"]

Obviously, in your case you would use the access token to authenticate against GCS rather than npm, but the overall concept should translate well to any similar situation.
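
For example, a GCS variant inside the Dockerfile could be a plain HTTP download using the token (a sketch only, assuming busybox wget is available in the alpine image and using placeholder bucket/object names):

# Declare the TOKEN build arg passed from the Cloud Build step, then download
# the object from Cloud Storage via the JSON API with the short-lived token.
ARG TOKEN
RUN wget --header="Authorization: Bearer ${TOKEN}" \
    -O my_file \
    "https://storage.googleapis.com/storage/v1/b/mybucket/o/my_file?alt=media"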