0 votes

I have a Kubernetes cluster to administer which is in its own private subnet on AWS. To allow us to administer it, we have a Bastion server on our public subnet. Tunnelling directly through to our cluster is easy. However, we need our deployment machine to establish a tunnel and execute commands against the Kubernetes server, such as running Helm and kubectl. Does anyone know how to do this?

Many thanks,

John

2
When you say "deployment machine", you mean the bastion server, right? – garlicFrancium
No. Concourse -> Bastion -> Cluster. Our Concourse machine has the tools, such as kubectl, to run against the cluster, but needs a tunnel established through the Bastion. – John Morsley

2 Answers

0 votes

In AWS

Scenario 1

By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC).

If that's the case, you can run kubectl commands from your Concourse server (which has internet access) using the kubeconfig file provided. If you don't have the kubeconfig file, follow these steps.
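For example, the AWS CLI can generate the kubeconfig entry for you. The cluster name and region below are placeholders; the command is printed rather than executed, since running it requires valid AWS credentials:

```shell
# Hypothetical cluster name and region -- substitute your own.
CLUSTER_NAME=my-eks-cluster
REGION=eu-west-1

# 'aws eks update-kubeconfig' writes a cluster/context entry into
# ~/.kube/config, after which kubectl can reach the public endpoint.
echo "aws eks update-kubeconfig --region $REGION --name $CLUSTER_NAME"
```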

Scenario 2

This applies when you have the private cluster endpoint enabled (which seems to be your case):

When you enable endpoint private access for your cluster, Amazon EKS creates a Route 53 private hosted zone on your behalf and associates it with your cluster's VPC. This private hosted zone is managed by Amazon EKS, and it doesn't appear in your account's Route 53 resources. In order for the private hosted zone to properly route traffic to your API server, your VPC must have enableDnsHostnames and enableDnsSupport set to true, and the DHCP options set for your VPC must include AmazonProvidedDNS in its domain name servers list. For more information, see Updating DNS Support for Your VPC in the Amazon VPC User Guide.
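As a sketch, you can verify those two VPC attributes with the AWS CLI. The VPC id below is a placeholder, and the commands are printed rather than executed since they need AWS credentials:

```shell
# Hypothetical VPC id -- substitute the VPC your cluster runs in.
VPC_ID=vpc-0abc1234567890def

# Both attributes must report 'true' for the private hosted zone to
# resolve the API server endpoint from inside the VPC:
for ATTR in enableDnsSupport enableDnsHostnames; do
  echo "aws ec2 describe-vpc-attribute --vpc-id $VPC_ID --attribute $ATTR"
done
```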

You can either modify your private endpoint (steps here) or follow these steps.

0 votes

There are probably simpler ways to get this done, but the first solution that comes to my mind is setting up simple SSH port forwarding.

Assuming that you have SSH access to both machines, i.e. Concourse can SSH to Bastion and Bastion can SSH to the Cluster, it can be done as follows:

First, set up so-called local SSH port forwarding on Bastion (described pretty well here):

ssh -L <kube-api-server-port>:localhost:<kube-api-server-port> ssh-user@<kubernetes-cluster-ip-address-or-hostname>

Now you can access your Kubernetes API from Bastion with:

curl localhost:<kube-api-server-port>

However, this still isn't what you need. Now you need to forward it on to your Concourse machine. On Concourse, run:

ssh -L <kube-api-server-port>:localhost:<kube-api-server-port> ssh-user@<bastion-server-ip-address-or-hostname>
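If Concourse holds keys for both hops, the two tunnels above can also be collapsed into a single command using ssh's -J (ProxyJump) option. The host names and port below are placeholders; the command is printed rather than run, since it needs reachable hosts and SSH keys:

```shell
# Hypothetical hosts, plus the common kube-apiserver default port.
BASTION=ec2-user@bastion.example.com
CLUSTER=ubuntu@10.0.1.10
PORT=6443

# -J hops through the bastion; -L forwards the API server port to
# localhost on the Concourse machine in one step:
echo "ssh -J $BASTION -L $PORT:localhost:$PORT $CLUSTER"
```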

From now on, your Kubernetes API is available on localhost of your Concourse machine, so you can e.g. query it with curl:

curl localhost:<kube-api-server-port>

or incorporate it into your .kube/config.
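A minimal cluster entry for .kube/config pointing at the tunnelled port might look like the fragment below (the names are illustrative). Setting tls-server-name keeps certificate verification working even though you connect via localhost:

```shell
# Print a minimal kubeconfig cluster stanza (hypothetical names):
cat <<'EOF'
clusters:
- name: tunnelled-cluster
  cluster:
    server: https://localhost:6443
    tls-server-name: kubernetes
EOF
```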

Let me know if it helps.

You can also make such a tunnel more persistent. You can find more on that here.
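For instance, autossh can supervise the tunnel and restart it when it drops. The host and port below are placeholders, and the command is printed rather than executed since it needs a reachable host and keys:

```shell
# -M 0 disables autossh's extra monitor port in favour of SSH's own
# keep-alives; -f -N backgrounds the tunnel without running a command.
echo "autossh -M 0 -f -N -o ServerAliveInterval=30 -o ServerAliveCountMax=3 -L 6443:localhost:6443 ec2-user@bastion.example.com"
```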