
A TLS handshake from an external client to a server inside a Kubernetes cluster fails. This is about understanding why.

I've configured an Istio ingress gateway to pass through TLS received on port 15443 and route it to the server on port 443.

The ingress gateway logs show activity when the client attempts the TLS handshake, but neither the server logs nor the istio-proxy logs do.

TLS client:

openssl s_client \ 
        -connect [redacted]-[redacted].us-west-2.elb.amazonaws.com:15443 \ 
        -servername myservice.mynamespace \ 
        -CAfile /path/to/ca.cert \ 
        -cert /path/to/cert.pem \ 
        -key /path/to/cert.key <<< "Q"

Output:

CONNECTED(00000006) 
140090868934296:error:140790E5:SSL routines:ssl23_write:ssl handshake failure:s23_lib.c:177: 
--- 
no peer certificate available 
--- 
No client certificate CA names sent 
--- 
SSL handshake has read 0 bytes and written 298 bytes 
--- 
New, (NONE), Cipher is (NONE) 
Secure Renegotiation IS NOT supported 
Compression: NONE 
Expansion: NONE 
No ALPN negotiated 
SSL-Session: 
    Protocol  : TLSv1.2 
    Cipher    : 0000 
    Session-ID:  
    Session-ID-ctx:  
    Master-Key:  
    Key-Arg   : None 
    PSK identity: None 
    PSK identity hint: None 
    SRP username: None 
    Start Time: 1600987862 
    Timeout   : 300 (sec) 
    Verify return code: 0 (ok) 

Istio ingress gateway logs:

"- - -" 0 - "-" "-" 298 0 1069 - "-" "-" "-" "-" "192.168.101.136:443" outbound|443||myservice.mynamespace.svc.cluster.local 192.168.115.141:42350 192.168.115.141:15443 192.168.125.206:23298 myservice.mynamespace - 

where 192.168.101.136 is the IP of the myservice pod and 192.168.115.141 is the IP of the ingressgateway pod.

Based on the IPs, the client connection reached the gateway, and the gateway appears to have applied the VirtualService route and logged that it was forwarding the traffic to the pod. That looks normal, except that the istio-proxy on the pod shows no activity, and neither do the server logs (though the server doesn't log anything happening at the transport layer).

AFAIK the server is properly configured for TLS, since the following port-forwarded TLS handshake succeeds:

kubectl port-forward -n mynamespace service/myservice 4430:443 &
openssl s_client \
        -connect localhost:4430 \
        -CAfile /path/to/ca.cert \ 
        -cert /path/to/cert.pem \ 
        -key /path/to/cert.key <<< "Q"
# I get back a TLS session ID, looks good.

So this points to a problem with the Istio Gateway or VirtualService configuration.

Gateway:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
  namespace: mynamespace
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 15443
        name: tls-passthrough
        protocol: TLS
      tls:
        mode: PASSTHROUGH
      hosts:
      - "*"

Virtual Service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
  namespace: mynamespace
spec:
  hosts:
  - "*" 
  gateways:
  - mygateway
  tls:
    - match:
      - port: 15443
        sniHosts:
        - myservice.mynamespace
      route:
      - destination:
          host: myservice
          port:
            number: 443

Kubernetes Service:

apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myservice
spec:
  selector:
    app: myservice
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
      name: grpc-svc

UPDATE: Actually, the TLS traffic from the client does reach the server pod. I've confirmed this by running tcpdump port 443 on the server pod and seeing packets arrive when I run the openssl s_client command. It's unclear why the istio-proxy on the pod didn't show this, and it still doesn't explain why the handshake fails.
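
For reference, the capture was along these lines (the pod and container names are placeholders, and this assumes tcpdump is available inside the container):

# pod/container names below are placeholders
kubectl exec -n mynamespace myservice-xxxx -c myservice -- \
        tcpdump -n -i any port 443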

I noticed something else. Passing the -msg flag to openssl s_client, I see nothing coming back: after the ">>>" lines there is no "<<<", yet tcpdump shows the server pod sending packets back to the gateway.
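
For completeness, this is the same s_client invocation as above with -msg added; it prints each handshake message, and only outgoing ">>>" messages appear, never an incoming "<<<":

# same command as in the question, with -msg appended
openssl s_client \
        -connect [redacted]-[redacted].us-west-2.elb.amazonaws.com:15443 \
        -servername myservice.mynamespace \
        -CAfile /path/to/ca.cert \
        -cert /path/to/cert.pem \
        -key /path/to/cert.key \
        -msg <<< "Q"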

I'm not sure there is something like protocol: TLS; could you try with TCP? If the handshake is from an external client, have you tried configuring a DestinationRule and ServiceEntry for it? – Jakub
A plaintext connection (i.e. TCP without TLS) between an external client and the server works. There is no protocol: TLS for ports in Kubernetes services; I have mine set to TCP already. I do need to try a tcp route in the virtual service, and I'll try that to see whether it works better than TLS passthrough (a sketch of such a route follows these comments). A DestinationRule and ServiceEntry don't seem useful to me here: the TLS traffic does reach the pod according to tcpdump, and I don't see what those would add to what I already have. – mipnw
As far as I checked, here they used protocol: HTTPS in the gateway instead of protocol: TLS. The main issue might be the PASSTHROUGH; while browsing GitHub I came across this and this. Based on that, could you try AUTO_PASSTHROUGH instead of PASSTHROUGH? Additionally, could you check whether it works with PASSTHROUGH/AUTO_PASSTHROUGH when the client is inside the mesh? – Jakub
Your links to two tickets complaining that PASSTHROUGH does not work in Istio are very interesting. They're closed; perhaps I should reopen them. I'm aware of AUTO_PASSTHROUGH, having read istio.io/latest/docs/reference/config/networking/gateway/…, but having found no tutorial that leverages that feature, I'm skeptical. In particular, it needs no virtual service; it needs the SNI to specify the service and port! I'll try AUTO_PASSTHROUGH if Istio acknowledges that PASSTHROUGH does not work. – mipnw
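
For context, the tcp route mentioned in these comments would look roughly like the sketch below. It is only a sketch: a tcp match has no SNI, so it matches on port alone, and the Gateway server would presumably need protocol: TCP instead of TLS for such a route to apply.

# sketch: tcp route matching on port only (no SNI)
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
  namespace: mynamespace
spec:
  hosts:
  - "*"
  gateways:
  - mygateway
  tcp:
  - match:
    - port: 15443
    route:
    - destination:
        host: myservice
        port:
          number: 443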

1 Answer


There were 2 bugs in my configuration:

  • In the configuration of my Kubernetes Service: the name of my TCP port 443 was grpc-svc, and that breaks TLS passthrough (Istio infers the protocol from the port name prefix, so a grpc- prefix makes it treat the traffic as gRPC/HTTP2 rather than opaque TCP). Renaming this port to tcp-svc resolves the problem.

  • I should not have been using ingress port 15443, which seems to be reserved for something else. I opened another port, 9444, on the ingressgateway, configured port 9444 on the Gateway exactly as I had configured port 15443 in my question (i.e. TLS passthrough), and configured the VirtualService to route port 9444 exactly as it had been routing port 15443 in my question.

Doing both of these allows openssl s_client from an external client to complete a TLS handshake to a Kubernetes service via the ingress gateway. The corrected manifests are sketched below.
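
Concretely, the corrected manifests looked roughly like this. This is a sketch reusing the names from the question; exposing port 9444 also requires adding it to the istio-ingressgateway Service itself, which is not shown here.

apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: mynamespace
  labels:
    app: myservice
spec:
  selector:
    app: myservice
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
      name: tcp-svc          # was grpc-svc; the tcp- prefix marks this as opaque TCP
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mygateway
  namespace: mynamespace
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 9444         # was 15443
        name: tls-passthrough
        protocol: TLS
      tls:
        mode: PASSTHROUGH
      hosts:
      - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myservice
  namespace: mynamespace
spec:
  hosts:
  - "*"
  gateways:
  - mygateway
  tls:
    - match:
      - port: 9444           # was 15443
        sniHosts:
        - myservice.mynamespace
      route:
      - destination:
          host: myservice
          port:
            number: 443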