I have a UDP service that I need to expose to the internet from an AWS EKS cluster. AWS load balancers (Classic or NLB) don't support UDP, so I'd like to use a NodePort service together with Route53 multi-value answer records to get round-robin UDP load balancing across my nodes.
However, my nodes on AWS EKS don't have an ExternalIP assigned to them. The EC2 instances the nodes run on do have public IPs, but these weren't assigned to the nodes when the cluster was created. How can I assign the EC2 public IPs to my Kubernetes nodes?
NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
x.us-west-2.compute.internal Ready <none> 7d v1.10.3 <none> Amazon Linux 2 (2017.12) LTS Release Candidate 4.14.42-61.37.amzn2.x86_64 docker://17.6.2
x.us-west-2.compute.internal Ready <none> 7d v1.10.3 <none> Amazon Linux 2 (2017.12) LTS Release Candidate 4.14.42-61.37.amzn2.x86_64 docker://17.6.2
x.us-west-2.compute.internal Ready <none> 7d v1.10.3 <none> Amazon Linux 2 (2017.12) LTS Release Candidate 4.14.42-61.37.amzn2.x86_64 docker://17.6.2
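For reference, my understanding is that the EXTERNAL-IP column above is read from each Node's `status.addresses` (the "InternalIP"/"ExternalIP"/"Hostname" address types are the real Kubernetes API values). A minimal sketch of that lookup, using an illustrative Node dict rather than my actual node:

```python
# Sketch: how the EXTERNAL-IP column is derived from a Node object.
# The address types are real Kubernetes API values; the sample node
# below is illustrative only, not my actual node.

def external_ip(node: dict) -> str:
    """Return the node's ExternalIP address, or "<none>" if absent."""
    for addr in node.get("status", {}).get("addresses", []):
        if addr["type"] == "ExternalIP":
            return addr["address"]
    return "<none>"

# An EKS node typically reports only an InternalIP and a Hostname,
# which is why the column above shows <none>.
eks_node = {
    "status": {
        "addresses": [
            {"type": "InternalIP", "address": "10.0.1.23"},
            {"type": "Hostname", "address": "x.us-west-2.compute.internal"},
        ]
    }
}
print(external_ip(eks_node))  # -> <none>
```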
I'm currently testing against an HTTP service for convenience, and here's what my test service looks like:
apiVersion: v1
kind: Service
metadata:
  name: backend-api
  labels:
    app: backend-api
spec:
  selector:
    app: backend-api
  type: NodePort
  ports:
    - name: back-http
      port: 81
      targetPort: 8000
      protocol: TCP
  externalIPs:
    - x.x.x.x
    - x.x.x.x
    - x.x.x.x
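My mental model of `externalIPs` (which may be where this breaks down) is that kube-proxy only steers a packet to the Service's backends when the packet's destination IP is one of the listed externalIPs. Since EC2 public IPs are 1:1 NATed at the internet gateway, the node actually sees the packet addressed to its private IP. A toy sketch of that mismatch, with hypothetical addresses (not kube-proxy's actual code):

```python
# Toy model of the externalIPs match (my mental model, not kube-proxy code):
# a packet is forwarded to the Service's endpoints only when its destination
# IP appears in spec.externalIPs.

SERVICE_EXTERNAL_IPS = {"203.0.113.10", "203.0.113.11"}  # hypothetical public IPs

def matches_service(dst_ip: str, dst_port: int, port: int = 81) -> bool:
    """Would this packet be steered to the Service's endpoints?"""
    return dst_ip in SERVICE_EXTERNAL_IPS and dst_port == port

# Traffic addressed to a listed externalIP would match...
print(matches_service("203.0.113.10", 81))  # -> True
# ...but EC2's 1:1 NAT rewrites the public IP to the instance's private IP
# before the node sees the packet, so the destination no longer matches:
print(matches_service("10.0.1.23", 81))     # -> False
```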
For this example, my curl requests never reach the HTTP service running on the nodes. My hunch is that this is because the nodes don't have ExternalIPs.