
So I am stuck trying to set up a Docker container that will run and deploy Kubernetes resources. I am using a Tekton pipeline and setting up a container with appuio/oc (I guess I could use a different one too).

My commands inside the container look something like this at the moment -

setting up gcloud

curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /google-cloud-sdk.tar.gz
mkdir -p /usr/local/gcloud
tar -C /usr/local/gcloud -xvf /google-cloud-sdk.tar.gz > /dev/null
/usr/local/gcloud/google-cloud-sdk/install.sh > /dev/null
export PATH=$PATH:/usr/local/gcloud/google-cloud-sdk/bin

logging into gcloud

gcloud auth activate-service-account --key-file=gcloud.json
gcloud projects list
gcloud config set project <PROJECT_NAME>
gcloud config set compute/zone us-west1-a
kubectl get pods -n tekton-pipelines

It authenticates, but gives me an error when running the get pods command -

Activated service account credentials for: [[email protected]]

PROJECT_ID NAME PROJECT_NUMBER

<PROJECT_NAME> My First Project <PROJECT_NUMBER>

Updated property [core/project]. Updated property [compute/zone]. Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:tekton-pipelines:default" cannot list resource "pods" in API group "" in the namespace "tekton-pipelines"

I tried a few things with setting up roles and granting permissions to service accounts, but none of them worked. Any help on this will be appreciated.
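For context, the kind of role setup I mean is something along these lines (the role and binding names here are my own placeholders; the namespace and service account are taken from the error message):

```shell
# Allow the default service account in the tekton-pipelines namespace
# to read pods in that namespace (role/binding names are placeholders)
kubectl create role pod-reader \
  --verb=get,list,watch --resource=pods \
  -n tekton-pipelines
kubectl create rolebinding pod-reader-binding \
  --role=pod-reader \
  --serviceaccount=tekton-pipelines:default \
  -n tekton-pipelines
```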


1 Answer


My understanding is that to access a GKE cluster from a fresh VM with gcloud installed, one first has to run the command:

gcloud container clusters get-credentials

If we look at the documentation for the command, it states:

gcloud container clusters get-credentials updates a kubeconfig file with appropriate credentials and endpoint information to point kubectl at a specific cluster in Google Kubernetes Engine.
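So the missing step in the pipeline above would look something like this (the cluster name is a placeholder I've assumed; the zone and project match the question):

```shell
# Write cluster credentials and endpoint info into the kubeconfig,
# so kubectl talks to the GKE cluster instead of using the in-cluster
# tekton-pipelines:default service account
gcloud container clusters get-credentials <CLUSTER_NAME> \
  --zone us-west1-a --project <PROJECT_NAME>

# Now kubectl is authenticated as the gcloud service account
kubectl get pods -n tekton-pipelines
```

Without this, kubectl running inside the Tekton pod falls back to the pod's mounted service account token, which is exactly the `system:serviceaccount:tekton-pipelines:default` identity named in the Forbidden error.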