I'm using a managed AWS EKS Kubernetes cluster. For the worker nodes, I have set up a node group within the EKS cluster with 2 worker nodes.
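For context, the node group was created roughly like this with eksctl (the cluster name, node group name, and instance type here are placeholders, not my exact values):

$ eksctl create nodegroup --cluster=my-cluster --name=workers --nodes=2 --node-type=t3.medium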
These worker nodes get a public IP assigned automatically by EKS:
$ kubectl get nodes -o wide
NAME                                          STATUS   ROLES    AGE   VERSION              INTERNAL-IP   EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
ip-10-0-0-129.eu-central-1.compute.internal   Ready    <none>   6d    v1.14.7-eks-1861c5   10.0.0.129    1.2.3.4       Amazon Linux 2   4.14.146-119.123.amzn2.x86_64   docker://18.6.1
ip-10-0-1-218.eu-central-1.compute.internal   Ready    <none>   6d    v1.14.7-eks-1861c5   10.0.1.218    5.6.7.8       Amazon Linux 2   4.14.146-119.123.amzn2.x86_64   docker://18.6.1
For this example, let's assume that the values assigned automatically by AWS are 1.2.3.4 and 5.6.7.8.
When I run a command from inside a pod on the first node, I can also see that this is the IP address from which external requests are made:
$ curl 'https://api.ipify.org'
1.2.3.4
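For reference, I run that check by exec'ing into a pod scheduled on the first node, along these lines (my-pod is a placeholder, and this assumes curl is available in the pod's image):

$ kubectl exec -it my-pod -- curl -s https://api.ipify.org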
The issue I'm facing is that I would like to control this IP address. Suppose that from within the pod I use a third-party service that I'm not in control of and that requires whitelisting by IP address.
I haven't found any way to specify, on the node group (or on the subnets set up for the VPC in which the cluster is located), a range of IP addresses from which AWS will pick the nodes' public IPs.
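For what it's worth, the closest subnet-level setting I could find is the auto-assign public IP flag, which only controls whether a node gets a public IP at all, not which one. I checked it like this (the subnet ID is a placeholder):

$ aws ec2 describe-subnets --subnet-ids subnet-0abc123 --query 'Subnets[].MapPublicIpOnLaunch'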
Is there any other way to configure the worker nodes to use fixed IP addresses?