I'm running Jenkins in GKE. One step of the build uses kubectl to deploy to another cluster. I have the gcloud SDK installed in the Jenkins container. The step in question does this:
gcloud auth activate-service-account --key-file /etc/secrets/google-service-account
gcloud config set project XXXX
gcloud config set account [email protected]
gcloud container clusters get-credentials ANOTHER_CLUSTER
However, I get this error (though it works as expected when run locally):
kubectl get pod
error: You must be logged in to the server (the server has asked for the client to provide credentials)
Note: I noticed that with no config at all (~/.kube is empty) I can still use kubectl and access the cluster the pod is currently running in. I'm not sure how that works; does it use /var/run/secrets/kubernetes.io/serviceaccount/ to authenticate to the cluster?
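(For context: when kubectl finds no kubeconfig and is running inside a pod, client-go falls back to "in-cluster" configuration, which reads the token and CA certificate from exactly that mount. A sketch of the roughly equivalent kubeconfig, with made-up cluster/user names, would be:)

```
apiVersion: v1
kind: Config
clusters:
- name: in-cluster            # hypothetical name; kubectl builds this implicitly
  cluster:
    server: https://kubernetes.default.svc
    certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
users:
- name: pod-service-account   # hypothetical name
  user:
    tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
contexts:
- name: in-cluster
  context:
    cluster: in-cluster
    user: pod-service-account
current-context: in-cluster
```

So the pod's service account only grants access to the cluster the pod runs in, which would explain why it works without any config but not against the other cluster.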
EDIT: I haven't tested whether this works yet, but adding a service account to the target cluster and using that from Jenkins might work:
http://kubernetes.io/docs/admin/authentication/ (search for "jenkins")
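(If that route works, the Jenkins side would need a kubeconfig pointing at the target cluster with the service account's bearer token. A minimal sketch, assuming a service account named jenkins already exists in the target cluster and you've extracted its token and the cluster's CA data; all names and the placeholder values are hypothetical:)

```
apiVersion: v1
kind: Config
clusters:
- name: target-cluster
  cluster:
    server: https://<TARGET_CLUSTER_ENDPOINT>   # placeholder
    certificate-authority-data: <BASE64_CA>     # placeholder
users:
- name: jenkins
  user:
    token: <SERVICE_ACCOUNT_TOKEN>              # placeholder
contexts:
- name: target
  context:
    cluster: target-cluster
    user: jenkins
current-context: target
```

The same file can also be built imperatively with kubectl config set-cluster / set-credentials / set-context.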
get-credentials didn't generate any kubeconfig? A service account would work, but you'd still have to push the credentials to Jenkins' kubeconfig file manually. – Antoine Cotten