16
votes

I am working on a Bitbucket pipeline for pushing an image to Google Container Registry. I have created a service account with the Storage Admin role. ([email protected])


gcloud auth activate-service-account --key-file key.json
gcloud config set project mgcp-xxxx
gcloud auth configure-docker --quiet
docker push eu.gcr.io/mgcp-xxxx/image-name

Although the login is successful, I get: Token exchange failed for project 'mgcp-xxxx'. Caller does not have permission 'storage.buckets.get'. To configure permissions, follow instructions at: https://cloud.google.com/container-registry/docs/access-control

Can anyone advise on what I am missing?

Thanks!

Can you show us the output of the docker daemon log? stackoverflow.com/a/30970134/1663462 – Chris Stryczynski
gcloud auth activate-service-account --key-file key.json – thank you! – Steven Kaspar

15 Answers

15
votes

For anyone reading all the way here: the other suggestions did not help me; however, I found that the Cloud Build Service Account role was also required. With that role, the storage.buckets.get error disappears.

This is my minimal two-role setup to push Docker images.

The Cloud Build Service Account role, however, adds many more permissions than simply storage.buckets.get. The exact permissions can be found here.

Note: I am well aware the Cloud Build Service Account role also adds the storage.objects.get permission. However, adding roles/storage.objectViewer did not resolve my problem, regardless of the fact that it had the storage.objects.get permission.
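
For reference, a minimal sketch of granting that role from the CLI. MY_PROJECT and the service-account email are placeholders for your own values, and roles/cloudbuild.builds.builder is, as far as I can tell, the role ID behind the "Cloud Build Service Account" display name:

# Grant the Cloud Build Service Account role to the service account
gcloud projects add-iam-policy-binding MY_PROJECT \
  --member serviceAccount:my-sa@MY_PROJECT.iam.gserviceaccount.com \
  --role roles/cloudbuild.builds.builder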

If the above does not work you might have the wrong account active. This can be resolved with:

gcloud auth activate-service-account --key-file key.json

If that does not work you might need to set the docker credential helpers with:

gcloud auth configure-docker --project <project_name>

One final note: there seemed to be some delay between setting a role and it taking effect via the gcloud tool. It was minimal, though; think less than a minute.

Cheers

13
votes

In the past I had another service account with the same name and different permissions. After discovering that service account names are cached, I created a new service account with a different name, and it is pushing properly.
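
A sketch of recreating the account under a fresh name (all names here are placeholders):

# Create a service account under a new, previously unused name
gcloud iam service-accounts create my-new-pusher --display-name "GCR pusher"
# Generate a fresh JSON key for it
gcloud iam service-accounts keys create key.json \
  --iam-account my-new-pusher@MY_PROJECT.iam.gserviceaccount.com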

12
votes

You need to be logged into your account and set the project to the project you'd like. There is a good chance you're just not logged in.

gcloud auth login

gcloud config set project <PROJECT_ID_HERE>

4
votes

For anyone else coming across this: my issue was that I had not granted my service account the Storage Legacy Bucket Reader role; I'd only granted it Storage Object Viewer. Adding that legacy permission fixed it.

It seems Docker is still using a legacy method to access GCR.
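
If you prefer the CLI, a sketch of granting that legacy role on the registry's backing bucket. This assumes the default gcr.io bucket name artifacts.MY_PROJECT.appspot.com; for regional hosts such as eu.gcr.io the bucket is prefixed, e.g. eu.artifacts.MY_PROJECT.appspot.com:

# Grant Storage Legacy Bucket Reader on the bucket backing the registry
gsutil iam ch \
  serviceAccount:my-sa@MY_PROJECT.iam.gserviceaccount.com:roles/storage.legacyBucketReader \
  gs://artifacts.MY_PROJECT.appspot.com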

4
votes

These are the step-by-step commands which got me to push my first container to a GCR private repo:

export PROJECT=pacific-shelter-218
export KEY_NAME=key-name1
export KEY_DISPLAY_NAME='My Key Name'

# Create the service account and generate a JSON key for it
sudo gcloud iam service-accounts create ${KEY_NAME} --display-name "${KEY_DISPLAY_NAME}"
sudo gcloud iam service-accounts list
sudo gcloud iam service-accounts keys create --iam-account ${KEY_NAME}@${PROJECT}.iam.gserviceaccount.com key.json
# Grant the account Storage Admin on the project, then log Docker in with the key
sudo gcloud projects add-iam-policy-binding ${PROJECT} --member serviceAccount:${KEY_NAME}@${PROJECT}.iam.gserviceaccount.com --role roles/storage.admin
sudo docker login -u _json_key -p "$(cat key.json)" https://gcr.io
sudo docker push gcr.io/pacific-shelter-218/mypcontainer:v2

2
votes

Here in the future, I've discovered that I no longer have any Legacy options. In this case I was forced to grant full Storage Admin. I'll open a ticket with Google about this; that's a bit extreme just to let me push an image. This might help someone else from the future.

1
votes

I tried several things, but it seems you have to run gcloud auth configure-docker.
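
For completeness, a sketch of what that looks like, including scoping it to a single registry host (eu.gcr.io here is just an example):

# Register gcloud as a Docker credential helper for the GCR hosts
gcloud auth configure-docker
# Or limit it to one registry host
gcloud auth configure-docker eu.gcr.io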

1
votes

Adding these roles to the service account in Google Cloud IAM fixed it for me (a CLI sketch follows below):

Editor
Storage Object Admin
Storage Object Viewer
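
A sketch of granting those roles from the CLI. MY_PROJECT and the service-account email are placeholders; note that Editor is very broad, so you may want to try the two storage roles first:

# Grant the two storage roles (and optionally Editor) to the service account
for role in roles/storage.objectAdmin roles/storage.objectViewer roles/editor; do
  gcloud projects add-iam-policy-binding MY_PROJECT \
    --member serviceAccount:my-sa@MY_PROJECT.iam.gserviceaccount.com \
    --role $role
done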

1
votes

I think the discrepancy is that https://cloud.google.com/container-registry/docs/access-control says, in the permissions and roles section, that you need the Storage Admin role in order to push images. However, in the next section, which explains how to configure access, it says to add Storage Object Admin to enable push access for the account you're configuring. Switching to Storage Admin should fix the issue.
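
To check which of the two roles the account actually holds before switching, a sketch (PROJECT_ID and the member email are placeholders):

# List the roles currently bound to the service account
gcloud projects get-iam-policy PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:my-sa@PROJECT_ID.iam.gserviceaccount.com" \
  --format="table(bindings.role)"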

0
votes

GCR just uses GCS to store images. Check the permissions on the artifacts.* bucket in GCS within the same project.
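
A sketch of inspecting that bucket's IAM policy (assuming the default gcr.io bucket naming; MY_PROJECT is a placeholder):

# List the buckets in the project, then inspect the registry bucket's policy
gsutil ls -p MY_PROJECT
gsutil iam get gs://artifacts.MY_PROJECT.appspot.com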

0
votes

I had a hard time figuring this out.

Although the error message was the same, my issue was that I was using the project name and not the project ID in the image URL.
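
A quick way to double-check, since the two can differ (the image path below is illustrative):

# NAME and PROJECT_ID are separate columns; the image URL must use PROJECT_ID
gcloud projects list
# e.g. push to eu.gcr.io/<PROJECT_ID>/image-name, not eu.gcr.io/<project name>/image-name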

0
votes

I created a separate service account to handle GCR I/O, added the Artifact Registry Administrator role (I need to push and pull images), and it started pushing images to GCR again.
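
A sketch of that grant from the CLI. The account name and MY_PROJECT are placeholders, and roles/artifactregistry.admin is, to my knowledge, the role ID behind the "Artifact Registry Administrator" display name:

# Grant Artifact Registry Administrator to the dedicated service account
gcloud projects add-iam-policy-binding MY_PROJECT \
  --member serviceAccount:gcr-io@MY_PROJECT.iam.gserviceaccount.com \
  --role roles/artifactregistry.admin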

0
votes

The docker push command will return this permission error if Docker is not authenticated with gcr.io.

Follow the steps below.

  1. Create a service account (or use an existing one) and grant it the following roles

    • Storage Admin
    • Storage Object Admin
  2. Generate a service account key (JSON) and download it

  3. Run docker-credential-gcr configure-docker

  4. Log in to Docker with the service account key

    docker login -u _json_key -p "$(cat [SERVICE_ACCOUNT_KEY.json])" https://gcr.io

  5. Try to push your Docker image to GCR

    docker push gcr.io/<project_id>/<image>:<tag>

0
votes

Pushing images requires object read and write permissions as well as the storage.buckets.get permission. The Storage Object Admin role does not include the storage.buckets.get permission, but the Storage Legacy Bucket Writer role does. You can find this under a note at https://cloud.google.com/container-registry/docs/access-control

So adding the Storage Legacy Bucket Writer role fixed it for me, as the Storage Object Admin role doesn't have the required storage.buckets.get permission.
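
A sketch of granting it on the registry bucket (the bucket name assumes the default gcr.io layout; MY_PROJECT and the account email are placeholders):

# Grant Storage Legacy Bucket Writer on the bucket backing gcr.io
gsutil iam ch \
  serviceAccount:my-sa@MY_PROJECT.iam.gserviceaccount.com:roles/storage.legacyBucketWriter \
  gs://artifacts.MY_PROJECT.appspot.com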

0
votes

What worked for me was going to the Google Cloud console -> IAM & Admin -> and setting Storage Admin as one of the roles for the service account.