
I'm not able to use the Python Firebase Admin SDK from a Docker container on Google Kubernetes Engine (GKE). The same container has no problems on Cloud Run. I believe the problem is permissions, but I've gotten pretty stuck. Any help would be appreciated!

Here is the outline of the flask app. All three routes work on Cloud Run, the first two work on GKE and the third fails.

import os

import firebase_admin
from firebase_admin import db
from flask import Flask, request

app = Flask(__name__)

# realtime database address
dbAddress = 'https://[projectID].firebaseio.com/'

# initialize the firebase SDK
credentials = None # with no explicit credentials, the service account's Application Default Credentials are used
firebase_admin.initialize_app(credentials, {'databaseURL': dbAddress})

@app.route('/')
def hello_world(): # works on cloud run and GKE
    print('Hello, World print statement!')
    return 'Hello, World!'

@app.route('/simplepost', methods = ['POST'])
def simple_post():# works on cloud run and GKE
    content = request.get_json()
    return {'results': content}, 201

@app.route('/firepost', methods = ['POST'])
def fire_post(): # works on cloud run. FAILS ON GKE!
    jobRef = db.reference('jobs/').push()
    return {'results': jobRef.path}, 201

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))

Repo with container (requires your own firebase project): https://github.com/crispyDyne/GKE-py-fire

Error from GKE console:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/firebase_admin/db.py", line 943, in request
    return super(_Client, self).request(method, url, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/firebase_admin/_http_client.py", line 117, in request
    resp.raise_for_status()
  File "/usr/local/lib/python3.8/site-packages/requests/models.py", line 941, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://[projectName].firebaseio.com/jobs.json

I've tried to solve the permission issue through Workload Identity, with no luck.

When I create my GKE cluster, I set a service account for the node pool that has the "owner" role (should be overkill). Under cluster security, I select the "Enable Workload Identity" checkbox.

I then configure the Kubernetes service account using the instructions below: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#gcloud_1

I deploy the workload from my container registry and expose it with an external load balancer (port: 80, target port: 8080). The first two routes work fine, but the third fails. All three work fine when deployed on Cloud Run.
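One way to narrow this down is to ask which identity the pod is actually running as, by querying the GKE metadata server from inside the container. This is a debugging sketch (not from the original post); it only works from inside GCP:

```python
import urllib.request

def pod_identity_email():
    """Ask the GCE/GKE metadata server which service account this pod uses."""
    req = urllib.request.Request(
        'http://metadata.google.internal/computeMetadata/v1/'
        'instance/service-accounts/default/email',
        headers={'Metadata-Flavor': 'Google'})  # header required by the metadata server
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# Inside a GKE pod:
# print(pod_identity_email())
```

If this prints the node pool's default compute account rather than the Google service account you intended, the pod is not picking up the Workload Identity binding.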

Hopefully I'm doing something dumb that is an easy fix! Cheers!


1 Answer


After lots of struggle I think I figured it out. Here are three ways to create a cluster, deploy a container, and end up with the correct Application Default Credentials in the pod.


1. Workload Identity (basically this Workload Identity article, with some deployment details added)

This method is preferred because it allows each pod deployment in a cluster to be granted only the permissions it needs.

The googleServiceAccount used needs to have the appropriate roles assigned (see below).

Create cluster (note: no scopes or service account defined)

gcloud beta container clusters create {cluster-name} \
  --release-channel regular \
  --identity-namespace {projectID}.svc.id.goog

Then create the k8sServiceAccount, assign roles, and annotate.

gcloud container clusters get-credentials {cluster-name}

kubectl create serviceaccount --namespace default {k8sServiceAccount}

gcloud iam service-accounts add-iam-policy-binding \
  --member serviceAccount:{projectID}.svc.id.goog[default/{k8sServiceAccount}] \
  --role roles/iam.workloadIdentityUser \
  {googleServiceAccount}

kubectl annotate serviceaccount \
  --namespace default \
  {k8sServiceAccount} \
  iam.gke.io/gcp-service-account={googleServiceAccount}
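Before deploying, the binding can be sanity-checked by starting a throwaway pod under the new Kubernetes service account and asking which Google identity it sees. This is a sketch based on the verification step in the Workload Identity docs, using the same placeholders:

```shell
kubectl run -it --rm workload-identity-test \
  --image=google/cloud-sdk:slim \
  --overrides='{"spec":{"serviceAccountName":"{k8sServiceAccount}"}}' \
  -- gcloud auth list
```

If the output lists {googleServiceAccount}, the binding is working.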

Then I create my deployment, and set the k8sServiceAccount. (Setting the service account was the part that I was missing)

kubectl create deployment {deployment-name} --image={containerImageURL}
kubectl set serviceaccount deployment {deployment-name} {k8sServiceAccount}
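Equivalently, the service account can be set declaratively in the deployment manifest instead of with kubectl set serviceaccount. A sketch, with the same placeholder names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {deployment-name}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {deployment-name}
  template:
    metadata:
      labels:
        app: {deployment-name}
    spec:
      serviceAccountName: {k8sServiceAccount}   # the line that was missing
      containers:
      - name: app
        image: {containerImageURL}
```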

Then expose with a target of 8080

kubectl expose deployment {deployment-name} --name={service-name} --type=LoadBalancer --port 80 --target-port 8080

2. Cluster Service Account

This method is not preferred, because all VMs and pods in the cluster will have permissions based on the defined service account.

Create cluster with assigned service account

gcloud beta container clusters create {cluster-name} \
  --release-channel regular \
  --service-account {googleServiceAccount}

The googleServiceAccount used needs to have the appropriate roles assigned (see below).

Then deploy and expose as above, but without setting the k8sServiceAccount


3. Scopes

This method is not preferred, because all VMs and pods in the cluster will have permissions based on the defined scopes.

Create cluster with assigned scopes (Firestore only requires "cloud-platform"; the Realtime Database also requires "userinfo.email")

gcloud beta container clusters create {cluster-name} \
  --release-channel regular \
  --scopes https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/userinfo.email

Then deploy and expose as above, but without setting the k8sServiceAccount


The first two methods require a Google Service Account with the appropriate roles assigned. Here are the roles I assigned to get a few Firebase products working:

  • Firestore: Cloud Datastore User (Datastore)
  • Realtime Database: Firebase Realtime Database Admin (Firebase Products)
  • Storage: Storage Object Admin (Cloud Storage)
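For reference, these roles can also be granted on the command line. A sketch, assuming the standard role IDs that correspond to the display names above:

```shell
# Firestore: Cloud Datastore User
gcloud projects add-iam-policy-binding {projectID} \
  --member serviceAccount:{googleServiceAccount} \
  --role roles/datastore.user

# Realtime Database: Firebase Realtime Database Admin
gcloud projects add-iam-policy-binding {projectID} \
  --member serviceAccount:{googleServiceAccount} \
  --role roles/firebasedatabase.admin

# Storage: Storage Object Admin
gcloud projects add-iam-policy-binding {projectID} \
  --member serviceAccount:{googleServiceAccount} \
  --role roles/storage.objectAdmin
```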