We have two approaches to expose the Dashboard: NodePort and LoadBalancer.
I'll demonstrate both cases and some of their pros and cons.
type: NodePort
This way your dashboard will be available at https://<NodeIP>:<NodePort>.
- I'll start with the Dashboard already deployed and running as ClusterIP (just like yours).
$ kubectl get service kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.0.11.223   <none>        443/TCP   11m
$ kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
service/kubernetes-dashboard patched
Note: You can also apply the change in YAML format, editing the field type: ClusterIP to type: NodePort; instead I wanted to show a direct approach with kubectl patch, using JSON format to patch the same field.
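For reference, here is a minimal sketch of that YAML change. The ports and selector below assume the standard kubernetes-dashboard v2 recommended deployment; match them to whatever your own manifest already declares:

kind: Service
apiVersion: v1
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
spec:
  type: NodePort            # changed from ClusterIP
  ports:
    - port: 443             # service port
      targetPort: 8443      # Dashboard container port in the v2 recommended deploy
  selector:
    k8s-app: kubernetes-dashboard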
- Now let's list the service again to see the new port:
$ kubectl get service kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.0.11.223   <none>        443:31681/TCP   13m
Note: Before accessing it from outside the cluster, you must allow incoming traffic through the exposed port in the nodes' security group (on GKE, a firewall rule).
Below is my example creating the rule on Google Cloud, but the same concept applies to EKS (a hedged AWS CLI equivalent follows the GCP example).
$ gcloud compute firewall-rules create test-node-port --allow tcp:31681
Creating firewall...done.
Created [https://www.googleapis.com/compute/v1/projects/owilliam/global/firewalls/test-node-port].
NAME             NETWORK   DIRECTION   PRIORITY   ALLOW       DENY   DISABLED
test-node-port   default   INGRESS     1000       tcp:31681          False
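On EKS, a rough equivalent would be opening the node port on the worker nodes' security group. The security group ID below is a placeholder, and a real rule should restrict --cidr to your own address range rather than the whole internet:

# Placeholder values: replace sg-0123456789abcdef0 with your worker nodes' security
# group ID and 203.0.113.0/24 with the range you want to allow.
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 31681 --cidr 203.0.113.0/24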
$ kubectl get nodes --output wide
NAME                                 STATUS   ROLES    AGE   VERSION         INTERNAL-IP   EXTERNAL-IP
gke-cluster-1-pool-1-4776b3eb-16t7   Ready    <none>   18d   v1.15.8-gke.3   10.128.0.13   35.238.162.157
- And I'll access it using https://35.238.162.157:31681.
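If you just want to confirm the port is reachable before opening a browser, a quick check from your machine (the -k flag tells curl to accept the Dashboard's self-signed certificate):

$ curl -k https://35.238.162.157:31681   # should return the Dashboard's HTML index page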
type: LoadBalancer
This way your dashboard will be available at https://<ExternalIP>.
Using LoadBalancer, your cloud provider automates the firewall rule and port forwarding, assigning a public IP to the service (you may be charged extra depending on your plan).
Same as before, I deleted the service and created it again as ClusterIP:
$ kubectl get service kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.0.2.196   <none>        443/TCP   15s
$ kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec": {"type": "LoadBalancer"}}'
service/kubernetes-dashboard patched
$ kubectl get service kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   LoadBalancer   10.0.2.196   <pending>     443:30870/TCP   58s
$ kubectl get service kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP      PORT(S)         AGE
kubernetes-dashboard   LoadBalancer   10.0.2.196   35.232.133.138   443:30870/TCP   11m
Note: When you apply it, the EXTERNAL-IP will be in the <pending> state; after a few minutes a public IP should be assigned, as you can see above.
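If you'd rather not re-run the command by hand, you can watch for the assignment:

$ kubectl get service kubernetes-dashboard -n kubernetes-dashboard --watch
# prints a new line whenever the service changes, e.g. when EXTERNAL-IP
# moves from <pending> to the assigned public IP; stop it with Ctrl+C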
- You can access it using https://35.232.133.138.
Security Considerations:
Your connection to the Dashboard, when exposed, is always over HTTPS; you may get a browser warning about the auto-generated certificate every time you visit, unless you replace it with a trusted one. You can find more in the Dashboard's certificate management documentation.
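A rough sketch of swapping in your own certificate, assuming the v2 recommended deployment (which mounts a kubernetes-dashboard-certs secret) and that you already have tls.crt and tls.key files in the current directory; the exact flags and secret layout may differ by Dashboard version, so check its certificate management docs:

# Recreate the certs secret with your own certificate and key
$ kubectl -n kubernetes-dashboard delete secret kubernetes-dashboard-certs
$ kubectl -n kubernetes-dashboard create secret generic kubernetes-dashboard-certs \
    --from-file=tls.crt --from-file=tls.key
# Then point the Dashboard container at them (instead of --auto-generate-certificates):
#   --tls-cert-file=tls.crt
#   --tls-key-file=tls.key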
Since the Dashboard is not meant to be widely exposed, I'd suggest keeping access through the public IP (or the generated DNS name in the case of AWS, e.g. *****.us-west-2.elb.amazonaws.com).
If you really want to integrate it with your main domain name, I'd suggest putting it behind another layer of authentication on your website.
New users will still need the Access Token, but no one will have to go through the exposure process again; you only have to pass them the IP/DNS address and the token.
This token has Cluster-Admin access, so keep it as safe as you'd keep a root password.
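For completeness, a hedged sketch of retrieving that token, assuming you created an admin-user ServiceAccount as in the Dashboard's "creating sample user" guide (the account name is an assumption, and on Kubernetes 1.24+ you would use kubectl create token instead):

# Pre-1.24 style: print the secret that holds the admin-user's bearer token
$ kubectl -n kubernetes-dashboard describe secret \
    $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')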
If you have any doubts, let me know!