2 votes

I'm running Jenkins in GKE. One step of the build uses kubectl to deploy to another cluster. I have the gcloud SDK installed in the Jenkins container. The build step in question does this:

gcloud auth activate-service-account --key-file /etc/secrets/google-service-account
gcloud config set project XXXX
gcloud config set account [email protected]
gcloud container clusters get-credentials ANOTHER_CLUSTER

However, I get this error (it works as expected locally, though):

kubectl get pod
error: You must be logged in to the server (the server has asked for the client to provide credentials)

Note: I noticed that with no config at all (~/.kube is empty) I'm able to use kubectl and get access to the cluster where the pod is currently running. I'm not sure how it does that; does it use /var/run/secrets/kubernetes.io/serviceaccount/ to access the cluster?
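
For reference, a minimal sketch of that in-cluster fallback. The mount path and the KUBERNETES_SERVICE_* variables are the standard ones Kubernetes injects into every pod; the curl call is just an illustration of what kubectl does with them:

# Every pod gets the current cluster's service account mounted here:
ls /var/run/secrets/kubernetes.io/serviceaccount/
# ca.crt  namespace  token

# With no kubeconfig, kubectl falls back to these files, which is
# equivalent to calling the API server directly with the bearer token:
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  "https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/namespaces/default/pods"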

EDIT: I haven't tested whether it works yet, but adding a service account to the target cluster and using that in Jenkins might work (rough sketch after the link):

http://kubernetes.io/docs/admin/authentication/ (search jenkins)
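A minimal sketch of that approach, assuming a dedicated service account on the target cluster. The names jenkins-deployer, TARGET_MASTER_IP, and /path/to/target-ca.crt are all hypothetical placeholders:

# On the target cluster: read the token of the dedicated service account
SECRET=$(kubectl get serviceaccount jenkins-deployer -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode)

# On the Jenkins side: build a kubeconfig entry that uses that token
kubectl config set-cluster target --server=https://TARGET_MASTER_IP --certificate-authority=/path/to/target-ca.crt
kubectl config set-credentials jenkins-deployer --token="$TOKEN"
kubectl config set-context deploy --cluster=target --user=jenkins-deployer
kubectl config use-context deploy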

1
Did you try to figure out why get-credentials didn't generate any kubeconfig? A service account would work, but you'd still have to push the credentials to Jenkins' kubeconfig file manually. – Antoine Cotten
It did; however, it seems that in the new Kubernetes version (v1.3.5) you still have to do the whole OAuth flow, so it's a version issue. – Alex Plugaru

1 Answer

0 votes

See this answer: kubectl oauth2 authentication with container engine fails

Before running gcloud auth activate-service-account --key-file /etc/secrets/google-service-account, you need to switch gcloud to its legacy client-certificate auth mode:

# Either export the environment variable (a bare assignment would not
# reach the gcloud subprocess)...
export CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True
# ...or set the equivalent gcloud config property:
gcloud config set container/use_client_certificate True
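
Putting it together, the corrected build step would look something like this (a sketch; the ordering is what matters, since get-credentials writes the kubeconfig entry based on that flag):

export CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE=True
gcloud auth activate-service-account --key-file /etc/secrets/google-service-account
gcloud config set project XXXX
gcloud container clusters get-credentials ANOTHER_CLUSTER
kubectl get pod  # should now authenticate with the embedded client certificate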

However, I have not succeeded using the other environment variable, GOOGLE_APPLICATION_CREDENTIALS.
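
For reference, that variable is normally just pointed at the JSON key file (standard Application Default Credentials usage, though it did not work in this case):

export GOOGLE_APPLICATION_CREDENTIALS=/etc/secrets/google-service-account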