19 votes

Using Container-Optimized OS (COS) on Google Cloud Compute Engine, what's the best way to access the credentials of the default service account of the VM's project from within a Docker container?

$ gcloud compute instances create test-instance \
  --image=cos-stable --image-project=cos-cloud

$ ssh (ip of the above)
# gcloud ...
Command not found

# docker run -ti google/cloud-sdk:alpine /bin/sh
# gcloud auth activate-service-account
... --key-file: Must be specified.

If the credentials were on the VM, then Docker could simply mount them. Ordinarily the credentials would live in .config/gcloud/, and one could mount them with docker run -v ~/.config/gcloud:~/.config/gcloud image. It is not apparent whether, or where, such credentials are available on Container-Optimized OS, particularly since it lacks gcloud.

If the credentials are not on the VM and mountable, the options would seem to include (option 2 is sketched just after this list):

  1. Put the credentials in the container metadata / environment variable;
  2. Create a .json credentials file for the service account, then
    1. upload it to the VM, then mount it; or
    2. add the .json to the container;
  3. Run a Docker container (e.g. cloud-sdk-docker) that obtains the credentials and shares them with the host via e.g. a shared mount partition. Ideally this would be done with gcloud auth activate-service-account.
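To make option 2 concrete, a rough sketch (the service-account email, key path, and IMAGE are placeholders; the Google Cloud client libraries pick the key up via the GOOGLE_APPLICATION_CREDENTIALS environment variable, and gcloud inside the container could use the same file with gcloud auth activate-service-account --key-file):

[local] $ gcloud iam service-accounts keys create key.json \
    --iam-account [email protected]
[local] $ gcloud compute scp key.json test-instance:/tmp/key.json
[test-instance] $ docker run -ti \
    -v /tmp/key.json:/secrets/key.json \
    -e GOOGLE_APPLICATION_CREDENTIALS=/secrets/key.json IMAGE

This works, but it leaves a long-lived key file to create, protect, and rotate, which is exactly the overhead argued against below.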

Is there a canonical or best-practices way to provide a Docker container with the service account credentials of the VM's project?

Google Cloud already has a security-policy model, and it is the desired one: a VM inside a project should have the access granted to its service account. To avoid complexity and the possibility of misconfiguration or mishap, the correct solution would employ this existing security model, i.e. it would not involve creating, downloading, distributing, and maintaining credential files.

It feels like this would be a routine problem that would need to be solved with COS, Docker, and Kubernetes, so I assume I've missed something straightforward — but the solution was not apparent to me from the docs.

EDIT — Noting the set-service-account API — this question could be reduced to "How do you use the set-service-account API with Container OS?" If it's not possible (because Container OS lacks gcloud and gsutil), I think this should be noted so VM users can plan accordingly.
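For reference, the underlying REST method (instances.setServiceAccount in the Compute Engine API) can be called without gcloud, because the metadata server hands out access tokens for the VM's service account. This is only a hedged sketch: PROJECT, ZONE, INSTANCE, and the service-account email are placeholders, the VM's own account needs sufficient compute scope and IAM permission, and the target instance must already be stopped:

$ curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
(returns {"access_token": "...", ...}; with that value exported as TOKEN:)
$ curl -s -X POST \
    -H "Authorization: Bearer ${TOKEN}" \
    -H "Content-Type: application/json" \
    -d '{"email": "[email protected]", "scopes": ["https://www.googleapis.com/auth/cloud-platform"]}' \
    "https://www.googleapis.com/compute/v1/projects/PROJECT/zones/ZONE/instances/INSTANCE/setServiceAccount"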

EDIT For the next folks who come across this:

To replicate the issue, I used:

[local] $ ssh test-instance-ip
[test-instance] $ docker run -it gcr.io/google-appengine/python /bin/bash
[test-instance] $ pip install --upgrade google-cloud-datastore
[test-instance] $ python

>>> from google.cloud import datastore
>>> datastore_client = datastore.Client()
>>> q = datastore.query.Query(datastore_client, kind='MODEL-KIND')
>>> list(q.fetch())
[... results]

The issue was indeed the access scopes set for the VM instance; in particular, the Datastore scope was disabled for the default account (under the heading Cloud API access scopes for the VM). One can find the scopes, and the necessary datastore line, as follows:

> gcloud compute instances describe test-instance
...
serviceAccounts:
- email: *****[email protected]
  scopes:
  - https://www.googleapis.com/auth/datastore
  ...
...

Note that the service account itself had permission to access Datastore (so Datastore could, in general, be accessed with a JSON credential key for that service account). The service account's effective permissions were limited by the scopes of the VM.
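For anyone fixing the same problem: scopes are set per instance and can only be changed while the instance is stopped. A sketch using gcloud from a machine that has it (the service-account email is a placeholder for the account shown in the describe output above):

$ gcloud compute instances stop test-instance
$ gcloud compute instances set-service-account test-instance \
    --service-account [email protected] \
    --scopes https://www.googleapis.com/auth/datastore
$ gcloud compute instances start test-instance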


2 Answers

12 votes

The usual way to authenticate would be the one described in the Google Cloud SDK Docker README.

From within the COS instance run this once:

docker run -ti --name gcloud-config google/cloud-sdk gcloud auth login

This will store your credentials in the gcloud-config container volume.

This volume should only be mounted into containers that you want to have access to your credentials, which probably won't be anything other than cloud-sdk:

docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk:alpine gcloud compute instances create test-docker --project [PROJECT]  


Created [https://www.googleapis.com/compute/v1/projects/project/zones/us-east1-b/instances/test-docker].
NAME         ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP      STATUS
test-docker  us-east1-b  n1-standard-1               10.142.0.8   X.X.X.X  RUNNING

Service accounts are usually meant to use their own set of credentials, which they have to get from somewhere, be it a key file, an environment variable, or a token:

gcloud auth activate-service-account

If you want gcloud (and other tools in the Cloud SDK) to use service account credentials to make requests, use this command to import these credentials from a file that contains a private authorization key, and activate them for use in gcloud. This command serves the same function as gcloud auth login but for using a service account rather than your Google user credentials.
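For completeness, a minimal invocation combined with the volume approach above (the key-file path is an illustrative placeholder, not something COS provides by itself):

docker run --rm -ti -v /tmp/key.json:/key.json google/cloud-sdk:alpine \
    gcloud auth activate-service-account --key-file=/key.json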

Also, the best practice is to create different service accounts for different instances, rather than getting the key of the default service account and using it (see the command sketch after the quoted steps):

In general, Google recommends that each instance that needs to call a Google API should run as a service account with the minimum permissions necessary for that instance to do its job. In practice, this means you should configure service accounts for your instances with the following process:

1 - Create a new service account rather than using the Compute Engine default service account.
2 - Grant IAM roles to that service account for only the resources that it needs.
3 - Configure the instance to run as that service account.
4 - Grant the instance the https://www.googleapis.com/auth/cloud-platform scope.
5 - Avoid granting more access than necessary and regularly check your service account permissions to make sure they are up-to-date.
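A command-line sketch of steps 1-4 above (the account name, project, and role are placeholders; grant whatever minimal role the workload actually needs):

gcloud iam service-accounts create instance-sa
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:[email protected]" \
    --role="roles/datastore.user"
gcloud compute instances create my-instance \
    --image=cos-stable --image-project=cos-cloud \
    --service-account=[email protected] \
    --scopes=https://www.googleapis.com/auth/cloud-platform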

UPDATE

I'm not sure set-service-account does what you need or want. With it you can change the service account that an instance uses (the instance must be stopped, though, so you can't use it to change the service account from within the instance being changed). However, you can use it normally against other instances, see:

jordim@cos ~ $ docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk:alpine gcloud compute instances set-service-account instance-1 --service-account [email protected]
Did you mean zone [us-east1-b] for instance: [instance-1] (Y/n)?  

Updated [https://www.googleapis.com/compute/v1/projects/XX/zones/us-east1-b/instances/instance-1].

2 votes

I think this issue is no longer entirely valid today, so I would like to share my two cents.

In the case of Container-Optimized OS, if the VM is running with the default service account, then the same account gets auto-configured inside the cloud-sdk container.

user@instance-1 ~ $ docker run -it gcr.io/google.com/cloudsdktool/cloud-sdk:alpine /bin/bash
bash-5.1# gcloud config list
[component_manager]
disable_update_check = true
[core]
account = *************[email protected]
disable_usage_reporting = true
project = my-project-id
[metrics]
environment = github_docker_image

Your active configuration is: [default]
bash-5.1# gcloud compute instances list
NAME        ZONE           MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP  STATUS
instance-1  us-central1-a  e2-medium                  10.128.0.3   34.**.**.***  RUNNING

Hence, one does not need to perform gcloud auth login and can directly execute all the gcloud commands, provided the default service account has the necessary permissions and the VM has the specific API scopes enabled explicitly.
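The reason no login is needed is that the container can reach the GCE metadata server, which issues OAuth tokens for the service account attached to the VM; the Cloud SDK and the Google Cloud client libraries pick these up automatically. A quick sanity check from inside the container (assuming default Docker networking):

curl -s -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"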

However, the original issue does still apply if the VM was created with the "No service account" option selected.