Using Container-Optimized OS (COS) on Google Compute Engine, what's the best way to access, from within a Docker container, the credentials of the default service account for the VM's project?
$ gcloud compute instances create test-instance \
--image=cos-stable --image-project=cos-cloud
$ ssh (ip of the above)
# gcloud ...
Command not found
# docker run -ti google/cloud-sdk:alpine /bin/sh
# gcloud auth activate-service-account
... --key-file: Must be specified.
If the credentials were on the VM, Docker could simply mount them. Ordinarily the credentials would be in ~/.config/gcloud/, and one would mount them with something like docker run -v ~/.config/gcloud:/root/.config/gcloud image. It is not apparent whether, or where, such credentials exist on Container-Optimized OS, particularly since it ships without gcloud.
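One diagnostic that does not depend on gcloud: since the container shares the VM's network, the GCE metadata server can be queried from inside it. A minimal check (assuming curl is available in the image):

# curl -s -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token

If that returns JSON containing an access_token, credentials for the VM's default service account are reachable from the container even though no key file is mounted.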
Failing the credentials being on the VM and mountable, options would seem to include:
- Put the credentials in the container metadata / an environment variable;
- Create a .json credentials file for the service account, upload it to the VM, and mount it into the container (a sketch follows this list);
- Add the .json file to the container image itself; or
- Run a Docker container (e.g. cloud-sdk-docker) that obtains the credentials and shares them with the host via e.g. a shared mount partition, ideally using gcloud auth activate-service-account.
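For the .json-key options above, a sketch of what that would look like (key.json, /secrets, and my-image are placeholders, and this is exactly the key-file handling I would prefer to avoid):

$ docker run -v /path/to/key.json:/secrets/key.json \
    -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/key.json \
    my-image

Client libraries that use Application Default Credentials pick up GOOGLE_APPLICATION_CREDENTIALS automatically; with gcloud in the container one could instead run gcloud auth activate-service-account --key-file=/secrets/key.json.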
Is there a canonical or best-practice way to provide a Docker container with the service account credentials of the VM's project?
Google Cloud already has a security-policy model, and it is the desired one: a VM inside a project should have the access granted by its service account. To avoid complexity and the possibility of misconfiguration or mishap, the correct solution would use this existing security model, i.e. it would not involve creating, downloading, distributing, and maintaining credential files.
It feels like this would be a routine problem for anyone using COS, Docker, and Kubernetes together, so I assume I've missed something straightforward, but the solution was not apparent to me from the docs.
EDIT: Noting the set-service-account API, this question could be reduced to "How do you use the set-service-account API with Container-Optimized OS?" If that is not possible (because Container-Optimized OS lacks gcloud and gsutil), I think this should be noted so VM users can plan accordingly.
EDIT: For the next folks who come across this:
To replicate the issue, I used:
[local] $ ssh test-instance-ip
[test-instance] $ docker run -it gcr.io/google-appengine/python /bin/bash
[container] # pip install --upgrade google-cloud-datastore
[container] # python
>>> from google.cloud import datastore
>>> datastore_client = datastore.Client()
>>> q = datastore.query.Query(datastore_client, kind='MODEL-KIND')
>>> list(q.fetch())
[... results]
The issue was indeed the Cloud API access scopes set for the VM instance: in particular, the Datastore API was disabled for the default service account (under the heading "Cloud API access scopes" for the VM). One can list the scopes, and check for the necessary datastore line, as follows:
> gcloud compute instances describe test-instance
...
serviceAccounts:
- email: *****[email protected]
scopes:
- https://www.googleapis.com/auth/datastore
...
...
Note that the service account itself had permission to access the Datastore (so the Datastore could generally be reached with a JSON credential key for that service account). The service account's permissions were limited by the scopes set on the VM.
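A related check that works without gcloud: the instance's scopes can also be read from inside the VM or the container by querying the metadata server; the datastore scope should appear in the list:

# curl -s -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/scopes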