I deployed a k8s cluster in the cloud (VMware vSphere) with 3 masters and 1 worker node. Then I installed nginx-ingress with Helm:
helm install stable/nginx-ingress
Deployed a few pods of a simple http-svc.
Changed the nginx-controller service type from LoadBalancer to NodePort and added externalIPs (the IP addresses of my master nodes), so it looks like this:
NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP                              PORT(S)                      AGE
ing-nginx-ingress-controller   NodePort    10.233.15.202   172.16.40.21,172.16.40.22,172.16.40.23   80:31045/TCP,443:31427/TCP   1d
http-svc                       ClusterIP   10.233.13.55    <none>                                   80/TCP                       1d
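(For reference, the change can be applied with a patch roughly like this; the service name is the one reported above and the IPs are my masters:)

# sketch of the service change: switch to NodePort and pin the master IPs as externalIPs
kubectl patch svc ing-nginx-ingress-controller -p \
  '{"spec":{"type":"NodePort","externalIPs":["172.16.40.21","172.16.40.22","172.16.40.23"]}}'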
Created a certificate and a secret:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt -subj "/CN=<FQDN_HERE>"
kubectl create secret tls secret --key /tmp/tls.key --cert /tmp/tls.crt
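(A quick sanity check of what went into the secret; names are the same as above:)

# confirm the self-signed cert's subject/validity and that the tls secret exists
openssl x509 -in /tmp/tls.crt -noout -subject -dates
kubectl describe secret secret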
And created an Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: some-ingress
  namespace: default
spec:
  tls:
  - hosts:
    - <FQDN_HERE>
    secretName: secret
  rules:
  - host: <FQDN_HERE>
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
        path: /
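To confirm the controller picked up the host and the TLS secret, I check something like this (a sketch; the label values are assumed from the stable/nginx-ingress chart, output omitted):

kubectl get ingress some-ingress
kubectl describe ingress some-ingress
# controller logs, selected by the chart's default labels
kubectl logs -l app=nginx-ingress,component=controller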
If I use a cloud DNAT rule:
external_ip:8443 -> master01_ip:443 (e.g. 172.16.40.21:443)
then I get a response:
curl --resolve <FQDN>:8443:<external_ip> https://<FQDN>:8443 -v -k
* Added <FQDN>:8443:<external_ip> to DNS cache
* Rebuilt URL to: https://<FQDN>:8443/
* Hostname <FQDN> was found in DNS cache
* Trying <external_ip>...
* TCP_NODELAY set
* Connected to <FQDN> (<external_ip>) port 8443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=<FQDN>
* start date: Feb 22 10:37:00 2018 GMT
* expire date: Feb 22 10:37:00 2019 GMT
* issuer: CN=<FQDN>
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET / HTTP/1.1
> Host: <FQDN>:8443
> User-Agent: curl/7.58.0
But if I use the Load Balancing feature (vEdge Gateway):
                  -> 172.16.40.21:443
external_ip:443   -> 172.16.40.22:443
                  -> 172.16.40.23:443
There is a problem:
curl --resolve <FQDN>:443:<external_ip> https://<FQDN> -vvvv -k
* Added <FQDN>:443:<external_ip> to DNS cache
* Rebuilt URL to: https://<FQDN>/
* Hostname <FQDN> was found in DNS cache
* Trying <external_ip>...
* TCP_NODELAY set
* Connected to <FQDN> (<external_ip>) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to <FQDN>:443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to <FQDN>:443
I also tried the same LB in front of two standalone VMs running nginx with a self-signed cert, and that worked as expected. The cloud provider says the LB is functional and that the problem is in the k8s ingress.
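To narrow it down, I can also test each backend directly on port 443, bypassing the LB (a sketch; the IPs and FQDN are the same as above):

# check whether each master answers the TLS handshake and which cert it serves
for ip in 172.16.40.21 172.16.40.22 172.16.40.23; do
  echo "== $ip =="
  openssl s_client -connect $ip:443 -servername "<FQDN_HERE>" </dev/null 2>/dev/null | openssl x509 -noout -subject
done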
Thanks!