I am not able to use the Python Firebase Admin SDK in a docker container, specifically on Google Kubernetes Engine (GKE). The same container has no problems on Cloud Run. I believe the problem is permissions, but I've gotten pretty stuck. Any help would be appreciated!
Here is the outline of the Flask app. All three routes work on Cloud Run; the first two work on GKE, and the third fails.
```python
import os

import firebase_admin
from firebase_admin import db
from flask import Flask, request

app = Flask(__name__)

# realtime database address
dbAddress = 'https://[projectID].firebaseio.com/'

# initialize the firebase SDK
credentials = None  # the service account should provide the credentials
firebase_admin.initialize_app(credentials, {'databaseURL': dbAddress})

@app.route('/')
def hello_world():  # works on Cloud Run and GKE
    print('Hello, World print statement!')
    return 'Hello, World!'

@app.route('/simplepost', methods=['POST'])
def simple_post():  # works on Cloud Run and GKE
    content = request.get_json()
    return {'results': content}, 201

@app.route('/firepost', methods=['POST'])
def fire_post():  # works on Cloud Run. FAILS ON GKE!
    jobRef = db.reference('jobs/').push()
    return {'results': jobRef.path}, 201

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))
```
Repo with container (requires your own firebase project): https://github.com/crispyDyne/GKE-py-fire
Error from the GKE console:

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/firebase_admin/db.py", line 943, in request
    return super(_Client, self).request(method, url, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/firebase_admin/_http_client.py", line 117, in request
    resp.raise_for_status()
  File "/usr/local/lib/python3.8/site-packages/requests/models.py", line 941, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://[projectName].firebaseio.com/jobs.json
```
I've tried to solve the permissions issue through Workload Identity, with no luck.
When I create my GKE cluster, I set a service account for the node pool that has the "owner" role (which should be overkill). Under cluster security, I check the "Enable Workload Identity" box.
I then configure the Kubernetes service account using the instructions below: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#gcloud_1
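For reference, the binding steps from that page look roughly like this (the service account and namespace names below are placeholders, not the ones from my project):

```shell
# GSA_NAME / KSA_NAME / PROJECT_ID are placeholders -- substitute your own.

# Allow the Kubernetes service account (KSA) to impersonate the
# Google service account (GSA).
gcloud iam service-accounts add-iam-policy-binding \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[default/KSA_NAME]" \
  GSA_NAME@PROJECT_ID.iam.gserviceaccount.com

# Annotate the KSA so Workload Identity knows which GSA to map it to.
kubectl annotate serviceaccount \
  --namespace default KSA_NAME \
  iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
```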
I deploy the workload from my container registry and expose it using an external load balancer (port: 80, target port: 8080). The first two routes work fine, but the third fails. All three work fine when deployed on Cloud Run.
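In case it helps diagnose this, here is how I'd check which identity the pods actually run as (deployment and pod names are placeholders):

```shell
# Which Kubernetes service account does the Deployment use?
# (If this prints nothing, the pods run as the "default" KSA.)
kubectl get deployment DEPLOYMENT_NAME \
  -o jsonpath='{.spec.template.spec.serviceAccountName}'

# From inside a pod: ask the metadata server which Google identity
# the workload resolves to.
kubectl exec -it POD_NAME -- curl -s \
  -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"
```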
Hopefully I'm doing something dumb that is an easy fix! Cheers!