I am trying to set up ExternalDNS on an EKS cluster using manifest files.
I created an EKS cluster with three Fargate profiles (default, kube-system, and dev), and the CoreDNS pods are up and running.
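For what it's worth, I verified them with something like the following (k8s-app=kube-dns is the label EKS puts on CoreDNS by default):

# list the CoreDNS pods and the Fargate nodes they were scheduled on
kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide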
I then installed the AWS Load Balancer Controller by following this doc: https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html. The load balancer controller came up in kube-system.
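Its deployment can be checked with something like this (assuming the default release name aws-load-balancer-controller used in that guide):

# confirm the controller deployment is available in kube-system
kubectl get deployment -n kube-system aws-load-balancer-controller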
I then installed the external-dns deployment using the following manifest file.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::xxxxxxxxx:role/eks-externaldnscontrollerRole-XST756O4A65B
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: external-dns
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
- kind: ServiceAccount
  name: external-dns
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: external-dns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
      - name: external-dns
        image: bitnami/external-dns:0.7.1
        args:
        - --source=service
        - --source=ingress
        #- --domain-filter=xxxxxxxxxx.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
        - --provider=aws
        #- --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
        - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
        - --registry=txt
        - --txt-owner-id=my-identifier
      #securityContext:
      #  fsGroup: 65534
I tried external-dns in both the kube-system and dev namespaces; in both cases the pods came up fine.
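I confirmed they were running using the app=external-dns label from the manifest above:

# confirm the external-dns pods are up
kubectl get pods -n kube-system -l app=external-dns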
I then deployed the application and ingress manifest files, again trying both namespaces, kube-system and dev.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-nginx-deployment
  labels:
    app: app1-nginx
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app1-nginx
  template:
    metadata:
      labels:
        app: app1-nginx
    spec:
      containers:
      - name: app1-nginx
        image: kube-nginxapp1:1.0.0
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "128Mi"
            cpu: "500m"
          limits:
            memory: "500Mi"
            cpu: "1000m"
---
apiVersion: v1
kind: Service
metadata:
  name: app1-nginx-nodeport-service
  labels:
    app: app1-nginx
  namespace: kube-system
  annotations:
    #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer
    alb.ingress.kubernetes.io/healthcheck-path: /app1/index.html
spec:
  type: NodePort
  selector:
    app: app1-nginx
  ports:
  - port: 80
    targetPort: 80
---
# Annotations Reference: https://kubernetes-sigs.github.io/aws-alb-ingress-controller/guide/ingress/annotation/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-usermgmt-restapp-service
  labels:
    app: usermgmt-restapp
  namespace: kube-system
  annotations:
    # Ingress Core Settings
    alb.ingress.kubernetes.io/scheme: internet-facing
    kubernetes.io/ingress.class: alb
    # Health Check Settings
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    #Important Note: Need to add health check path annotations in service level if we are planning to use multiple targets in a load balancer
    #alb.ingress.kubernetes.io/healthcheck-path: /usermgmt/health-status
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/success-codes: '200'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
    ## SSL Settings
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:0xxxxxxxxxx:certificate/0xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    #alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-1-2017-01 #Optional (Picks default if not used)
    # SSL Redirect Setting
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    # External DNS - For creating a Record Set in Route53
    external-dns.alpha.kubernetes.io/hostname: palb.xxxxxxx.com
    # For Fargate
    alb.ingress.kubernetes.io/target-type: ip
spec:
  rules:
  - http:
      paths:
      - path: /* # SSL Redirect Setting
        pathType: ImplementationSpecific
        backend:
          service:
            name: ssl-redirect
            port:
              name: use-annotation
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: app1-nginx-nodeport-service
            port:
              number: 80
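After applying this, the ALB itself was provisioned; its hostname shows up when I list the ingress:

# the ADDRESS column should show the DNS name of the provisioned ALB
kubectl get ingress -n kube-system ingress-usermgmt-restapp-service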
All pods came up fine, but it is not dynamically registering the DNS alias for the ALB. Can you please guide me on what I am doing wrong?
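For reference, these are the checks I can run and share output from if that helps (<hosted-zone-id> is a placeholder; the Route 53 call assumes the AWS CLI is configured for the right account):

# events on the ingress usually show why the controller or ExternalDNS is unhappy
kubectl describe ingress -n kube-system ingress-usermgmt-restapp-service

# ExternalDNS logs the record changes it plans, plus any AWS API errors
kubectl logs -n kube-system deployment/external-dns

# check whether the alias record was actually written to the hosted zone
aws route53 list-resource-record-sets --hosted-zone-id <hosted-zone-id>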