
I am using HAProxy as my on-prem load balancer in front of my Kubernetes cluster. Here is the cfg file:

global
  chroot      /var/lib/haproxy
  pidfile     /var/run/haproxy.pid
  maxconn     40000
  user        haproxy
  group       haproxy
  daemon
  tune.ssl.default-dh-param 2048
  log stdout local0  info
defaults
  mode                    tcp
  log global
  option                  httplog
  retries                 3
  timeout http-request    50s
  timeout queue           1m
  timeout connect         1m
  timeout client          1m
  timeout server          1m
  timeout http-keep-alive 50s
  timeout check           10s
  maxconn                 1000
frontend https_front
  mode http
  bind *:443 ssl crt /etc/haproxy/haproxy.pem ca-file /etc/haproxy/haproxy.crt verify optional
  redirect scheme https if !{ ssl_fc }
  acl sadmin path_beg /sadmin
  use_backend sadmin_server if sadmin
  default_backend sadmin_server
backend sadmin_server
  balance roundrobin
  mode http
  server node1 staging-node1:30000 check-ssl verify required ca-file /etc/haproxy/backend-ca.crt
  server node2 staging-node2:30000 check-ssl verify required ca-file /etc/haproxy/backend-ca.crt
  server node3 staging-node3:30000 check-ssl verify required ca-file /etc/haproxy/backend-ca.crt
  server node4 staging-node4:30000 check-ssl verify required ca-file /etc/haproxy/backend-ca.crt

I used the same ca.crt that is used to issue certificates to the Ingress objects in Kubernetes; I had created an issuer from this CA in cert-manager.

However, now I am getting the error:

none of the servers are available to take requests.

<134>Oct 28 21:18:59 haproxy[6]: 10.119.49.97:64484 [28/Oct/2019:21:18:56.891] https_front~ sadmin_server/node1 1/0/-1/-1/3046 503 237 - - SC-- 1/1/0/0/3 0/0 "GET /sadmin/ HTTP/1.1"

With the option ssl verify none, the flow works.

Can anyone tell me which certificate to use in such cases to encrypt the connection between HAProxy and the NGINX ingress controller?

PS: I don't use SSL passthrough, as I have to apply the ACLs, which cannot be done in TCP mode.
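To see which certificate the NodePort is actually presenting to HAProxy, you can inspect it with openssl. This is a sketch, not the asker's commands: the live form (shown as a comment) uses the question's staging-node1:30000; the runnable part demonstrates the same inspection pipeline on a locally generated self-signed certificate, and all filenames and the myworld.com CN are hypothetical.

```shell
# Live check against the cluster would look like (requires network access):
#   echo | openssl s_client -connect staging-node1:30000 -servername myworld.com 2>/dev/null \
#     | openssl x509 -noout -subject -issuer
# Demonstrated here on a locally generated certificate instead:
set -e
# Create a throwaway self-signed cert (stand-in for whatever the ingress serves)
openssl req -x509 -newkey rsa:2048 -nodes -keyout tls.key -out tls.crt \
  -days 1 -subj "/CN=myworld.com"
# Print who the certificate claims to be and who issued it
openssl x509 -noout -subject -issuer -in tls.crt
```

If the issuer printed by the live check is the ingress-nginx "Kubernetes Ingress Controller Fake Certificate", the controller is serving its default self-signed cert rather than one chained to your CA, which is exactly what HAProxy's `verify required` rejects.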

UPDATE:

kubectl describe svc nginx-ingress -n ingress
Name:                     nginx-ingress
Namespace:                ingress
Labels:                   <none>
Annotations:              <none>
Selector:                 app=nginx-ingress-lb
Type:                     NodePort
IP:                       10.xxx.xx.xxx
Port:                     http  80/TCP
TargetPort:               80/TCP
NodePort:                 http  32170/TCP
Endpoints:                10.xxx.xx.xxx:80
Port:                     http-mgmt  18080/TCP
TargetPort:               18080/TCP
NodePort:                 http-mgmt  32000/TCP
Endpoints:                10.xxx.xx.xxx:18080
Port:                     https  443/TCP
TargetPort:               443/TCP
NodePort:                 https  30000/TCP
Endpoints:                10.xxx.xx.xxx:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:  

kubectl describe deployment nginx-ingress-controller -n ingress
Name:                   nginx-ingress-controller
Namespace:              ingress
CreationTimestamp:      Mon, 09 Sep 2019 19:00:45 +0000
Labels:                 app=nginx-ingress-lb
Annotations:            deployment.kubernetes.io/revision: 2
Selector:               app=nginx-ingress-lb
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:           app=nginx-ingress-lb
  Service Account:  nginx
  Containers:
   nginx-ingress-controller:
    Image:       nginx-ingress-controller:0.9.0
    Ports:       80/TCP, 18080/TCP
    Host Ports:  0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --default-backend-service=ingress/default-backend
      --configmap=ingress/nginx-ingress-controller-conf
      --v=2
    Liveness:   http-get http://:10254/healthz delay=10s timeout=5s period=10s #success=1 #failure=3
    Readiness:  http-get http://:10254/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:        (v1:metadata.name)
      POD_NAMESPACE:   (v1:metadata.namespace)
    Mounts:           <none>
  Volumes:            <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-ingress-controller-5cdf7fff4c (1/1 replicas created)
Events:          <none>

There are no Ingresses defined in that namespace.

Comments:

After taking a few tcpdumps I could see that I was getting an "Unknown CA" error. The reason is that the NGINX ingress controller was sending the Fake certificate instead of the actual TLS certificate from the Ingress. Does anyone know what to do for NGINX to send the actual host certs? – swetad90

Could you update your post with the output of the following commands: kubectl describe ingress nginx-ingress -n=kube-system and kubectl describe service <ingress-controller-name> -n=kube-system. Are you sure that the Ingress and Secret are in the same namespace? – aga

1 Answer


The way I worked this out is to use a CA-signed certificate in the nginx-ingress-controller's default-ssl-certificate argument. Now all Ingresses that do not need cert-manager certificates can use this CA-signed certificate for TLS communication.
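A minimal sketch of wiring that up, based on the deployment args shown in the question's UPDATE: the `--default-ssl-certificate` flag is the standard ingress-nginx option, but the `ingress/default-cert` secret name here is an assumed placeholder for a Secret holding the CA-signed tls.crt/tls.key.

```yaml
# Pod-template args for the nginx-ingress-controller container;
# "ingress/default-cert" is a hypothetical namespace/secretName.
args:
  - /nginx-ingress-controller
  - --default-backend-service=ingress/default-backend
  - --configmap=ingress/nginx-ingress-controller-conf
  - --default-ssl-certificate=ingress/default-cert
  - --v=2
```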

One thing to note in the Ingress config is not to mention the secretName. That way it will use the default certificate of the NGINX ingress controller.

  tls:
  - hosts:
    - myworld.com.com

You can now give HAProxy the root certificate as the ca-file, and verification works fine.
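Before reloading HAProxy, it's worth confirming locally that the default certificate actually chains to the CA you hand to ca-file. A self-contained sketch with openssl, where the throwaway CA stands in for your signing CA and every filename is hypothetical:

```shell
set -e
# Throwaway CA (stand-in for the CA whose root HAProxy will trust)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 1 -subj "/CN=demo-ca"
# Server certificate signed by that CA (stand-in for the default ingress cert)
openssl req -newkey rsa:2048 -nodes -keyout server.key -out server.csr \
  -subj "/CN=myworld.com"
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out server.crt -days 1
# If this reports OK, HAProxy's "check-ssl verify required ca-file ca.crt"
# will accept a backend presenting server.crt
openssl verify -CAfile ca.crt server.crt
```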