I am using Kubernetes on Minikube.
I created a user context for a new user with a role and rolebinding, using:
>kubectl config set-context user1-context --cluster=minikibe --namespace=default --user=user1
When I try to see the running pods, the target machine refuses the connection. But in the minikube context I can see the pods, although both contexts should be pointing at the same cluster.
>kubectl config get-contexts
CURRENT   NAME             CLUSTER          AUTHINFO         NAMESPACE
          docker-desktop   docker-desktop   docker-desktop
          minikube         minikube         minikube
*         user1-context    minikibe         user1
>kubectl get pods
Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.
>kubectl config use-context minikube
Switched to context "minikube".
>kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-test 1/1 Running 0 44m
Config for the RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
subjects:
- kind: User
  name: user1
  apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
Role config:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
I also verified the creation of the role and rolebinding:
>kubectl get roles
NAME CREATED AT
pod-reader 2020-10-20T05:59:43Z
>kubectl get rolebindings
NAME ROLE AGE
read-pods Role/pod-reader 65m
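As a further sanity check on the RBAC side (a suggested command, not something from the original post), kubectl can ask the API server whether user1 would be allowed the expected access, via impersonation from an admin context:

```shell
# Ask the API server whether user1 may list pods in the default namespace.
# Run this from a context with impersonation rights (e.g. the minikube admin context).
# With the Role and RoleBinding above in place, the answer should be "yes".
kubectl auth can-i list pods --as=user1 --namespace=default
```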
It should be --cluster=minikube and you have --cluster=minikibe. Can you fix it and let me know if it works? - Jakub
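A sketch of the fix, assuming the cluster is registered in your kubeconfig under the name minikube: re-running set-context with the correct spelling updates the existing context in place, after which the context resolves to a real server instead of falling back to localhost:8080.

```shell
# Re-point user1-context at the existing "minikube" cluster entry
# (the original command misspelled it as "minikibe").
kubectl config set-context user1-context --cluster=minikube --namespace=default --user=user1

# Switch back to the fixed context and retry
kubectl config use-context user1-context
kubectl get pods
```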