I'm configuring a highly available Kubernetes cluster on GKE using Terraform. Multiple teams will run multiple deployments on the cluster, and I expect most deployments to live in a custom namespace, mainly for isolation reasons.
One of our open questions is how to manage GCP service accounts on the cluster.
I can create the cluster with a custom GCP service account and adjust its permissions so it can pull images from GCR, write logs to Stackdriver, etc. I believe this custom service account will be used by the GKE nodes instead of the default Compute Engine service account. Please correct me if I'm wrong on this front!
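For reference, here's a minimal sketch of how I'm creating that; the account name, role list, and the `project_id`/`region` variables are placeholders, not my real config:

```hcl
# Custom service account for the GKE nodes, used instead of the
# default Compute Engine service account.
resource "google_service_account" "gke_nodes" {
  account_id   = "gke-nodes" # placeholder name
  display_name = "GKE node service account"
}

# Minimal node permissions: logging, monitoring, and GCR image pulls.
resource "google_project_iam_member" "node_roles" {
  for_each = toset([
    "roles/logging.logWriter",
    "roles/monitoring.metricWriter",
    "roles/storage.objectViewer", # GCR images are stored in GCS buckets
  ])
  project = var.project_id
  role    = each.value
  member  = "serviceAccount:${google_service_account.gke_nodes.email}"
}

resource "google_container_cluster" "cluster" {
  name     = "ha-cluster" # placeholder
  location = var.region

  node_config {
    service_account = google_service_account.gke_nodes.email
    oauth_scopes    = ["https://www.googleapis.com/auth/cloud-platform"]
  }
}
```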
Each deployment needs to access a different set of GCP resources (Cloud Storage, Datastore, Cloud SQL, etc.), and I'd like each deployment to have its own GCP service account so we can control permissions per deployment. I'd also like running pods to have no access to the GCP service account available to the node running them.
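To make the intent concrete, this is roughly what I'd like to express per deployment (the account, team, and bucket names are placeholders):

```hcl
# One GCP service account per deployment, with narrowly scoped permissions.
resource "google_service_account" "app" {
  account_id   = "team-a-app" # placeholder per-deployment name
  display_name = "Service account for team-a's app deployment"
}

# Example: this particular deployment only needs to read one GCS bucket.
resource "google_storage_bucket_iam_member" "app_bucket" {
  bucket = "team-a-data" # placeholder bucket
  role   = "roles/storage.objectViewer"
  member = "serviceAccount:${google_service_account.app.email}"
}
```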
Is that possible?
I've considered some options, but I'm not confident about their feasibility or desirability:
- A GCP service account key for a deployment could be added to the cluster as a Kubernetes secret; deployments could mount it as a file and set `GOOGLE_APPLICATION_CREDENTIALS` to point to it (sketched after this list).
- Maybe access to the instance metadata API can be denied to pods, or can the service account returned by the metadata API be changed?
- Maybe there's a GKE (or Kubernetes) native way to control the service account presented to pods?
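For the first option, here's a sketch of what I'm imagining, using the Terraform kubernetes provider and the `google_service_account.app` resource from the sketch above (the namespace, secret, and image names are placeholders):

```hcl
# Export a key for the deployment's service account and store it as a secret.
resource "google_service_account_key" "app" {
  service_account_id = google_service_account.app.name
}

resource "kubernetes_secret" "app_gcp_key" {
  metadata {
    name      = "app-gcp-key"
    namespace = "team-a" # placeholder namespace
  }
  data = {
    # private_key is base64-encoded; the provider re-encodes secret data.
    "key.json" = base64decode(google_service_account_key.app.private_key)
  }
}

# Mount the key and point GOOGLE_APPLICATION_CREDENTIALS at the file.
resource "kubernetes_deployment" "app" {
  metadata {
    name      = "app"
    namespace = "team-a"
  }
  spec {
    replicas = 2
    selector {
      match_labels = { app = "app" }
    }
    template {
      metadata {
        labels = { app = "app" }
      }
      spec {
        container {
          name  = "app"
          image = "gcr.io/my-project/app:latest" # placeholder image
          env {
            name  = "GOOGLE_APPLICATION_CREDENTIALS"
            value = "/var/secrets/google/key.json"
          }
          volume_mount {
            name       = "gcp-key"
            mount_path = "/var/secrets/google"
            read_only  = true
          }
        }
        volume {
          name = "gcp-key"
          secret {
            secret_name = kubernetes_secret.app_gcp_key.metadata[0].name
          }
        }
      }
    }
  }
}
```

I'm aware exported keys come with rotation and handling overhead, which is part of why I'm asking whether there's a better native mechanism.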