
I use AWS EKS as my k8s control plane and deployed a 3-node autoscaling group as my worker nodes (K8s nodes). This autoscaling group sits in my VPC, and I made sure the security groups are at least permissive enough for peer nodes and the ELB to communicate.
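
For reference, the kind of rule I added looks roughly like this (both security-group IDs are placeholders):

# Let the ELB's security group reach the worker nodes on all TCP ports.
# sg-aaaaaaaa = worker-node SG, sg-bbbbbbbb = ELB SG (both placeholders)
aws ec2 authorize-security-group-ingress \
  --group-id sg-aaaaaaaa \
  --protocol tcp \
  --port 0-65535 \
  --source-group sg-bbbbbbbb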

I am trying to use nginx-ingress to route traffic coming from outside the k8s cluster. I deploy nginx-ingress with Helm, using a values.yaml.
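
Roughly like this (Helm 2 syntax; the release name and namespace are just what I happened to pick):

helm install stable/nginx-ingress \
  --name nginx-ingress \
  --namespace ingress-nginx \
  -f values.yaml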

My values.yaml looks like this:

serviceAccount:
  create: true
  name: nginx-ingress-sa
rbac:
  create: true

controller:
  kind: "Deployment"
  service:
    type: "LoadBalancer"
    # targetPorts:
    #   http: 80
    #   https: http
    loadBalancerSourceRanges:
      - 1.2.3.4/32
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "https"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789:certificate/my-cert
      service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: sg-12345678
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
      nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
      nginx.ingress.kubernetes.io/enable-access-log: "true"
  config:
    log-format-escape-json: "true"
    log-format-upstream: '{"real_ip": "$the_real_ip", "remote_user": "$remote_user", "time_iso8601": "$time_iso8601", "request": "$request", "request_method": "$request_method", "status": "$status", "upstream_addr": "$upstream_addr", "upstream_status": "$upstream_status"}'
  extraArgs:
    v: 3 # NGINX log level

My Ingress YAML:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress-1
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/enable-access-log: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: "s1.testk8.dev.mydomain.net"
    http:
      paths:
      - path: /
        backend:
          serviceName: s1-service
          servicePort: 443
  - host: "s2.testk8.dev.mydomain.net"
    http:
      paths:
      - path: /
        backend:
          serviceName: s2-service
          servicePort: 443
  tls:
  - hosts:
    - "s1.testk8.dev.mydomain.net"
    - "s2.testk8.dev.mydomain.net"
    secretName: "testk8.dev.mydomain.net"

Note that this secret holds a self-signed TLS cert for the domain *.mydomain.net.
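
(I created the secret roughly like this; the file names are placeholders:)

kubectl create secret tls testk8.dev.mydomain.net \
  --cert=wildcard.mydomain.net.crt \
  --key=wildcard.mydomain.net.key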

The behavior right now with these settings is that if I enter https://s1.testk8.dev.mydomain.net in Chrome, it just hangs; it says "waiting for s1.testk8.dev.mydomain.net" in the lower-left corner.

If I use:

curl -vk https://s1.testk8.dev.mydomain.net

It returns:

*   Trying x.x.x.x...
* TCP_NODELAY set
* Connected to s1.testk8.dev.mydomain.net (x.x.x.x) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: CN=*.mydomain.net
*  start date: Apr 25 00:00:00 2018 GMT
*  expire date: May 25 12:00:00 2019 GMT
*  issuer: C=US; O=Amazon; OU=Server CA 1B; CN=Amazon
*  SSL certificate verify ok.
> GET / HTTP/1.1
> Host: s1.testk8.dev.mydomain.net
> User-Agent: curl/7.54.0
> Accept: */*
> 

And curl, too, just sits there waiting for a server response.

I also tried tweaking the values.yaml. When I change

service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http" # instead of https as above

and then hit the https://s1.testk8.dev.mydomain.net URL, I can at least see the HTTP 400 message (plain HTTP request sent to HTTPS port) from the ingress controller pod.
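
(I see those 400s by tailing the controller pod; the exact deployment name depends on the Helm release, so this is approximate:)

kubectl logs -f deploy/nginx-ingress-controller -n ingress-nginx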

If I uncomment these lines in the values.yaml:

# targetPorts:
#   http: 80
#   https: http

I am able to reach my backend pod (managed by a StatefulSet, not listed here); I can see new entries in my backend pod's access log.
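
If I understand the chart correctly, uncommenting those lines makes the rendered controller Service map the ELB's 443 listener onto the controller's plain-HTTP port, i.e. TLS gets terminated at the ELB and decrypted traffic reaches nginx. A sketch of what I believe the resulting ports section looks like (assumed, not copied from my cluster):

ports:
- name: http
  port: 80
  targetPort: 80
- name: https
  port: 443
  targetPort: http   # TLS ends at the ELB; plain HTTP reaches the controller

Which would explain why traffic gets through in that case, though not encrypted end to end.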

I am not sure whether my use case is unusual here: I see that most folks using nginx-ingress on AWS terminate TLS at the ELB, but I need my backend pods to terminate TLS.

I also tried the ssl-passthrough flag; it didn't help. When the backend-protocol is https, my request doesn't even seem to reach the ingress controller, so ssl-passthrough is probably moot at that point anyway.
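
For completeness, this is roughly how I tried to enable passthrough (a sketch of my attempt, not a known-good config; my understanding is that it would also require the ELB to forward raw TCP instead of terminating TLS, so the ssl-cert/ssl-ports annotations above would have to go):

controller:
  extraArgs:
    enable-ssl-passthrough: "true"   # renders as --enable-ssl-passthrough on the controller

plus this annotation on the Ingress:

nginx.ingress.kubernetes.io/ssl-passthrough: "true"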

Thank you in advance if you just read all the way through here!!

Comments:

Please share your service definitions. - samhain1138
@samhain1138 The service of my nginx-ingress or the service of my backend? - congbaoguier
Every single piece of YAML possibly related to your question. Providing partial information doesn't help anyone provide you with a useful answer. - samhain1138

2 Answers

2 votes

As far as I can tell, even with the current master of nginx-ingress, it is not possible to use self-signed certificates on the backends. The template https://github.com/kubernetes/ingress-nginx/blob/master/rootfs/etc/nginx/template/nginx.tmpl is missing the directives that would be needed, such as:

location / {
    proxy_pass                    https://backend.server.ip/;
    proxy_ssl_trusted_certificate /etc/nginx/sslcerts/backend.server.pem;
    proxy_ssl_verify              off;

    # ... other proxy settings
}

So try using, e.g., a Let's Encrypt certificate instead.
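
For example, you can get a wildcard certificate via the DNS-01 challenge (requires certbot 0.22+ and the ACME v2 endpoint; the domain and paths below are placeholders) and load it into the cluster as the TLS secret:

certbot certonly --manual --preferred-challenges dns \
  --server https://acme-v02.api.letsencrypt.org/directory \
  -d '*.mydomain.net'
kubectl create secret tls testk8.dev.mydomain.net \
  --cert=/etc/letsencrypt/live/mydomain.net/fullchain.pem \
  --key=/etc/letsencrypt/live/mydomain.net/privkey.pem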

0 votes

My guess is that your backend services expect HTTPS while the traffic in between is being sent over plain HTTP. These lines in your values.yaml seem odd:

targetPorts:
  http: 80
  https: http

Can you try something like this?

targetPorts:
  http: 80
  https: 443
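
With https: 443, the ELB should forward to the controller's HTTPS port instead of its plain-HTTP port, so the connection stays encrypted up to nginx (which your backend-protocol: "HTTPS" annotation then carries through to the pods). You can verify which port mapping the chart actually rendered with something like this (the Service name depends on your Helm release name):

kubectl get svc nginx-ingress-controller -n ingress-nginx \
  -o jsonpath='{.spec.ports}'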