10
votes

What is the best way to deploy Google service account credentials inside a custom-built CentOS Docker container for running on either Google Container Engine or their 'container-vm'? This happens automatically with the google/cloud-sdk container, which runs Debian and includes components I'm not using, such as App Engine/Java/PHP. Ideally I want to access non-public resources inside my project, e.g., Google Cloud Storage bucket objects, without logging in and authorizing every single time a large number of these containers are launched.

For example, on a base CentOS container running on GCE with custom code and gcloud/gsutil installed, when you run:

docker run --rm -ti custom-container gsutil ls

You are prompted to run "gsutil config" to gain authorization, which I expect.

However, after pulling the google/cloud-sdk container onto the same GCE instance and executing the same command, it seems to have cleverly configured inheritance of credentials (perhaps from the host container-vm's credentials?). This bypasses "gsutil config" when running the container on GCE to access private resources.
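One likely source of that inherited credential is the GCE metadata server, which every process on the VM, including processes inside containers, can reach over the instance's network. As a sketch (the container name `custom-container` is your own image), you can verify this from inside a minimal container:

```shell
# Query the GCE metadata server for the default service account's
# OAuth 2.0 access token. This works from inside a container because
# the metadata server is reachable from the VM's network namespace.
docker run --rm -ti custom-container \
  curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
```

If this returns a token, tools like gsutil in the container should be able to obtain credentials the same way the google/cloud-sdk image does, provided the VM was created with the appropriate scopes.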

I am looking to replicate that behavior in a minimally built CentOS container for mass deployment.

3
Is your question about how to easily authenticate from GCE to GCS, or about how to have a minimal gcloud SDK container, or something else? Why do you consider that it's good for development but not production? What issues are you running into when you have many of those containers? Also, consider splitting the second part of the post into a separate question. - Misha Brukman
Edited above for attempted clarification. - TimK

3 Answers

4
votes

Update: as of 15 Dec 2016, the ability to update the scopes of an existing VM is now in beta; see this SO answer for more details.


Old answer: One approach is to create the VM with appropriate scopes (e.g., Google Cloud Storage read-only or read-write) and then all processes on the VM, including containers, will have access to credentials that they can use via OAuth 2.0; see docs for Google Cloud Storage and Google Compute Engine.

Note that once a VM is created with some set of scopes, they cannot be changed later (neither added nor removed), so you have to be sure to set the right set of scopes at the time of VM instance creation.
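As a sketch of the old approach (the instance name is hypothetical; `storage-ro` is the scope alias for read-only Google Cloud Storage access):

```shell
# Create the VM with read-only access to Google Cloud Storage.
# All processes on it, including containers, can then obtain
# matching OAuth 2.0 credentials from the metadata server.
gcloud compute instances create my-container-vm \
  --scopes storage-ro
```

Use `storage-rw` instead for read-write access; the full scope URI (e.g., https://www.googleapis.com/auth/devstorage.read_only) also works.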

2
votes

Followup.

I ended up using the /.config & /.gce directories and a very minimal set of Cloud SDK components (no JDK/PHP/etc.). The wheezy-cloudtools Dockerfile proved to be the best example I could find.
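A minimal sketch of this approach, assuming the host stores its gcloud configuration under /root/.config and /root/.gce (the exact paths depend on your host setup and SDK version):

```shell
# Mount the host's gcloud configuration into the container so
# gsutil inside the container inherits the host's credentials.
docker run --rm -ti \
  -v /root/.config:/root/.config \
  -v /root/.gce:/root/.gce \
  custom-container gsutil ls
```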

0
votes

Your answer may be contained in this documentation:

Container Registry Advanced Authentication

It contains authentication methods for Docker containers using

  • Standalone credential helper
  • Access token
  • JSON key file
  • gcloud credential helper (recommended)

For gcloud it states that:

Use the gcloud tool to configure authentication in Cloud Shell or any environment where the Cloud SDK is installed. Cloud Shell includes a current version of Docker.

  1. Log in to gcloud as the user that will run Docker commands. To configure authentication with user credentials, run the following command:

gcloud auth login

     Alternatively, to configure authentication with service account credentials, run the following command:

gcloud auth activate-service-account ACCOUNT --key-file=KEY-FILE

Where

  • ACCOUNT is the service account name in the format [USERNAME]@[PROJECT-ID].iam.gserviceaccount.com. You can view existing service accounts on the Service Accounts page of Cloud Console or with the command gcloud iam service-accounts list
  • KEY-FILE is the service account key file. See the Identity and Access Management (IAM) documentation for information about creating a key.

  2. Configure Docker with the following command:

gcloud auth configure-docker

Your credentials are saved in your user home directory.
Linux: $HOME/.docker/config.json
Windows: %USERPROFILE%/.docker/config.json
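The service-account path above can be sketched end to end; the account name, project, and key file name here are hypothetical placeholders:

```shell
# Create a key for an existing service account (hypothetical names).
gcloud iam service-accounts keys create key.json \
  --iam-account=deployer@my-project.iam.gserviceaccount.com

# Activate the service account credentials in gcloud.
gcloud auth activate-service-account \
  deployer@my-project.iam.gserviceaccount.com --key-file=key.json

# Register gcloud as Docker's credential helper so docker pull/push
# against gcr.io uses those credentials.
gcloud auth configure-docker
```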