4
votes

I'm setting up an on-prem Kubernetes cluster. For tests I use a single-node cluster on a VM, set up with kubeadm. My requirements include running an MQTT cluster (VerneMQ) in k8s with external access via Ingress (Istio).

Without deploying an ingress, I can connect (mosquitto_sub) via a NodePort or LoadBalancer service.

Istio was installed using istioctl install --set profile=demo

The problem

I am trying to access the VerneMQ broker from outside the cluster. An Ingress (Istio Gateway) seems like the perfect solution in this case, but I can't establish a TCP connection to the broker (neither via the ingress IP, nor directly via the svc/vernemq IP).

So, how do I establish this TCP connection from an external client through the Istio ingress?

What I tried

I've created two namespaces:

  • exposed-with-istio – with istio proxy injection
  • exposed-with-loadbalancer - without istio proxy

Within the exposed-with-loadbalancer namespace I deployed VerneMQ with a LoadBalancer Service. It works, which is how I know VerneMQ can be accessed (with mosquitto_sub -h <host> -p 1883 -t hello, where host is the ClusterIP or ExternalIP of svc/vernemq). The dashboard is accessible at host:8888/status, and 'Clients online' increments on the dashboard.

Within exposed-with-istio I deployed VerneMQ with a ClusterIP Service, an Istio Gateway and a VirtualService. Immediately after istio-proxy injection, mosquitto_sub can't subscribe, neither through the svc/vernemq IP nor through the Istio ingress (gateway) IP. The command just hangs, constantly retrying. Meanwhile, the VerneMQ dashboard endpoint is accessible through both the service IP and the Istio gateway.

I guess the istio proxy must be configured for MQTT to work.

Here is the istio-ingressgateway service:

kubectl describe svc/istio-ingressgateway -n istio-system

Name:                     istio-ingressgateway
Namespace:                istio-system
Labels:                   app=istio-ingressgateway
                          install.operator.istio.io/owning-resource=installed-state
                          install.operator.istio.io/owning-resource-namespace=istio-system
                          istio=ingressgateway
                          istio.io/rev=default
                          operator.istio.io/component=IngressGateways
                          operator.istio.io/managed=Reconcile
                          operator.istio.io/version=1.7.0
                          release=istio
Annotations:              Selector:  app=istio-ingressgateway,istio=ingressgateway
Type:                     LoadBalancer
IP:                       10.100.213.45
LoadBalancer Ingress:     192.168.100.240
Port:                     status-port  15021/TCP
TargetPort:               15021/TCP
Port:                     http2  80/TCP
TargetPort:               8080/TCP
Port:                     https  443/TCP
TargetPort:               8443/TCP
Port:                     tcp  31400/TCP
TargetPort:               31400/TCP
Port:                     tls  15443/TCP
TargetPort:               15443/TCP
Session Affinity:         None
External Traffic Policy:  Cluster
...

Here are debug logs from istio-proxy (kubectl logs svc/vernemq -n test istio-proxy):

2020-08-24T07:57:52.294477Z debug   envoy filter    original_dst: New connection accepted
2020-08-24T07:57:52.294516Z debug   envoy filter    tls inspector: new connection accepted
2020-08-24T07:57:52.294532Z debug   envoy filter    http inspector: new connection accepted
2020-08-24T07:57:52.294580Z debug   envoy filter    [C5645] new tcp proxy session
2020-08-24T07:57:52.294614Z debug   envoy filter    [C5645] Creating connection to cluster inbound|1883|mqtt|vernemq.test.svc.cluster.local
2020-08-24T07:57:52.294638Z debug   envoy pool  creating a new connection
2020-08-24T07:57:52.294671Z debug   envoy pool  [C5646] connecting
2020-08-24T07:57:52.294684Z debug   envoy connection    [C5646] connecting to 127.0.0.1:1883
2020-08-24T07:57:52.294725Z debug   envoy connection    [C5646] connection in progress
2020-08-24T07:57:52.294746Z debug   envoy pool  queueing request due to no available connections
2020-08-24T07:57:52.294750Z debug   envoy conn_handler  [C5645] new connection
2020-08-24T07:57:52.294768Z debug   envoy connection    [C5646] delayed connection error: 111
2020-08-24T07:57:52.294772Z debug   envoy connection    [C5646] closing socket: 0
2020-08-24T07:57:52.294783Z debug   envoy pool  [C5646] client disconnected
2020-08-24T07:57:52.294790Z debug   envoy filter    [C5645] Creating connection to cluster inbound|1883|mqtt|vernemq.test.svc.cluster.local
2020-08-24T07:57:52.294794Z debug   envoy connection    [C5645] closing data_to_write=0 type=1
2020-08-24T07:57:52.294796Z debug   envoy connection    [C5645] closing socket: 1
2020-08-24T07:57:52.294864Z debug   envoy wasm  wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=12
2020-08-24T07:57:52.294882Z debug   envoy wasm  wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=16
2020-08-24T07:57:52.294885Z debug   envoy wasm  wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=20
2020-08-24T07:57:52.294887Z debug   envoy wasm  wasm log: [extensions/stats/plugin.cc:609]::report() metricKey cache hit , stat=24
2020-08-24T07:57:52.294891Z debug   envoy conn_handler  [C5645] adding to cleanup list
2020-08-24T07:57:52.294949Z debug   envoy pool  [C5646] connection destroyed

These are logs from istio-ingressgateway. IP 10.244.243.205 belongs to the VerneMQ pod, not the service (that is probably intended).

2020-08-24T08:48:31.536593Z debug   envoy filter    [C13236] new tcp proxy session
2020-08-24T08:48:31.536702Z debug   envoy filter    [C13236] Creating connection to cluster outbound|1883||vernemq.test.svc.cluster.local
2020-08-24T08:48:31.536728Z debug   envoy pool  creating a new connection
2020-08-24T08:48:31.536778Z debug   envoy pool  [C13237] connecting
2020-08-24T08:48:31.536784Z debug   envoy connection    [C13237] connecting to 10.244.243.205:1883
2020-08-24T08:48:31.537074Z debug   envoy connection    [C13237] connection in progress
2020-08-24T08:48:31.537116Z debug   envoy pool  queueing request due to no available connections
2020-08-24T08:48:31.537138Z debug   envoy conn_handler  [C13236] new connection
2020-08-24T08:48:31.537181Z debug   envoy connection    [C13237] connected
2020-08-24T08:48:31.537204Z debug   envoy pool  [C13237] assigning connection
2020-08-24T08:48:31.537221Z debug   envoy filter    TCP:onUpstreamEvent(), requestedServerName: 
2020-08-24T08:48:31.537880Z debug   envoy misc  Unknown error code 104 details Connection reset by peer
2020-08-24T08:48:31.537907Z debug   envoy connection    [C13237] remote close
2020-08-24T08:48:31.537913Z debug   envoy connection    [C13237] closing socket: 0
2020-08-24T08:48:31.537938Z debug   envoy pool  [C13237] client disconnected
2020-08-24T08:48:31.537953Z debug   envoy connection    [C13236] closing data_to_write=0 type=0
2020-08-24T08:48:31.537958Z debug   envoy connection    [C13236] closing socket: 1
2020-08-24T08:48:31.538156Z debug   envoy conn_handler  [C13236] adding to cleanup list
2020-08-24T08:48:31.538191Z debug   envoy pool  [C13237] connection destroyed

My configurations

vernemq-istio-ingress.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: exposed-with-istio
  labels:
    istio-injection: enabled
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vernemq
  namespace: exposed-with-istio
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
  namespace: exposed-with-istio
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["endpoints", "deployments", "replicasets", "pods"]
    verbs: ["get", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: endpoint-reader
  namespace: exposed-with-istio
subjects:
  - kind: ServiceAccount
    name: vernemq
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: endpoint-reader
---
apiVersion: v1
kind: Service
metadata:
  name: vernemq
  namespace: exposed-with-istio
  labels:
    app: vernemq
spec:
  selector:
    app: vernemq
  type: ClusterIP
  ports:
    - port: 4369
      name: empd
    - port: 44053
      name: vmq
    - port: 8888
      name: http-dashboard
    - port: 1883
      name: tcp-mqtt
      targetPort: 1883
    - port: 9001
      name: tcp-mqtt-ws
      targetPort: 9001
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vernemq
  namespace: exposed-with-istio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vernemq
  template:
    metadata:
      labels:
        app: vernemq
    spec:
      serviceAccountName: vernemq
      containers:
        - name: vernemq
          image: vernemq/vernemq
          ports:
            - containerPort: 1883
              name: tcp-mqtt
              protocol: TCP
            - containerPort: 8080
              name: tcp-mqtt-ws
            - containerPort: 8888
              name: http-dashboard
            - containerPort: 4369
              name: epmd
            - containerPort: 44053
              name: vmq
            - containerPort: 9100-9109 # shortened
          env:
            - name: DOCKER_VERNEMQ_ACCEPT_EULA
              value: "yes"
            - name: DOCKER_VERNEMQ_ALLOW_ANONYMOUS
              value: "on"
            - name: DOCKER_VERNEMQ_listener__tcp__allowed_protocol_versions
              value: "3,4,5"
            - name: DOCKER_VERNEMQ_allow_register_during_netsplit
              value: "on"
            - name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
              value: "1"
            - name: DOCKER_VERNEMQ_KUBERNETES_APP_LABEL
              value: "vernemq"
            - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MINIMUM
              value: "9100"
            - name: DOCKER_VERNEMQ_ERLANG__DISTRIBUTION__PORT_RANGE__MAXIMUM
              value: "9109"
            - name: DOCKER_VERNEMQ_KUBERNETES_INSECURE
              value: "1"
vernemq-loadbalancer-service.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: exposed-with-loadbalancer
---
... the rest is the same except for the namespace and service type ...
istio.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: vernemq-destination
  namespace: exposed-with-istio
spec:
  host: vernemq.exposed-with-istio.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: vernemq-gateway
  namespace: exposed-with-istio
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - "*"
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vernemq-virtualservice
  namespace: exposed-with-istio
spec:
  hosts:
  - "*"
  gateways:
  - vernemq-gateway
  http:
  - match:
    - uri:
        prefix: /status
    route:
    - destination:
        host: vernemq.exposed-with-istio.svc.cluster.local
        port:
          number: 8888
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: vernemq.exposed-with-istio.svc.cluster.local
        port:
          number: 1883

Does the Kiali screenshot imply that the ingressgateway only forwards HTTP traffic to the service and eats all TCP? Kiali graph tab

UPD

Following the suggestions, here's the output:

> But your envoy logs reveal a problem: envoy misc Unknown error code 104 details Connection reset by peer and envoy pool [C5648] client disconnected.

istioctl proxy-config listeners vernemq-c945876f-tvvz7.exposed-with-istio

first with | grep 8888 and | grep 1883, then the full output:

0.0.0.0        8888  App: HTTP                                               Route: 8888
0.0.0.0        8888  ALL                                                     PassthroughCluster
10.107.205.214 1883  ALL                                                     Cluster: outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
...                                            Cluster: outbound|853||istiod.istio-system.svc.cluster.local
10.107.205.214 1883  ALL                                                     Cluster: outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
10.108.218.134 3000  App: HTTP                                               Route: grafana.istio-system.svc.cluster.local:3000
10.108.218.134 3000  ALL                                                     Cluster: outbound|3000||grafana.istio-system.svc.cluster.local
10.107.205.214 4369  App: HTTP                                               Route: vernemq.exposed-with-istio.svc.cluster.local:4369
10.107.205.214 4369  ALL                                                     Cluster: outbound|4369||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        8888  App: HTTP                                               Route: 8888
0.0.0.0        8888  ALL                                                     PassthroughCluster
10.107.205.214 9001  ALL                                                     Cluster: outbound|9001||vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        9090  App: HTTP                                               Route: 9090
0.0.0.0        9090  ALL                                                     PassthroughCluster
10.96.0.10     9153  App: HTTP                                               Route: kube-dns.kube-system.svc.cluster.local:9153
10.96.0.10     9153  ALL                                                     Cluster: outbound|9153||kube-dns.kube-system.svc.cluster.local
0.0.0.0        9411  App: HTTP                                               ...
0.0.0.0        15006 Trans: tls; App: TCP TLS; Addr: 0.0.0.0/0               InboundPassthroughClusterIpv4
0.0.0.0        15006 Addr: 0.0.0.0/0                                         InboundPassthroughClusterIpv4
0.0.0.0        15006 App: TCP TLS                                            Cluster: inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 Trans: tls; App: TCP TLS                                Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 Trans: tls                                              Cluster: inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 Trans: tls; App: TCP TLS                                Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 Trans: tls                                              Cluster: inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 ALL                                                     Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15006 App: TCP TLS                                            Cluster: inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
0.0.0.0        15010 App: HTTP                                               Route: 15010
0.0.0.0        15010 ALL                                                     PassthroughCluster
10.106.166.154 15012 ALL                                                     Cluster: outbound|15012||istiod.istio-system.svc.cluster.local
0.0.0.0        15014 App: HTTP                                               Route: 15014
0.0.0.0        15014 ALL                                                     PassthroughCluster
0.0.0.0        15021 ALL                                                     Inline Route: /healthz/ready*
10.100.213.45  15021 App: HTTP                                               Route: istio-ingressgateway.istio-system.svc.cluster.local:15021
10.100.213.45  15021 ALL                                                     Cluster: outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
0.0.0.0        15090 ALL                                                     Inline Route: /stats/prometheus*
10.100.213.45  15443 ALL                                                     Cluster: outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
10.105.193.108 15443 ALL                                                     Cluster: outbound|15443||istio-egressgateway.istio-system.svc.cluster.local
0.0.0.0        20001 App: HTTP                                               Route: 20001
0.0.0.0        20001 ALL                                                     PassthroughCluster
10.100.213.45  31400 ALL                                                     Cluster: outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
10.107.205.214 44053 App: HTTP                                               Route: vernemq.exposed-with-istio.svc.cluster.local:44053
10.107.205.214 44053 ALL                                                     Cluster: outbound|44053||vernemq.exposed-with-istio.svc.cluster.local

> Furthermore please run: istioctl proxy-config endpoints and istioctl proxy-config routes.

istioctl proxy-config endpoints vernemq-c945876f-tvvz7.exposed-with-istio, first with | grep 1883, then the full output:

10.244.243.206:1883              HEALTHY     OK                outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:1883                   HEALTHY     OK                inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
ENDPOINT                         STATUS      OUTLIER CHECK     CLUSTER
10.101.200.113:9411              HEALTHY     OK                zipkin
10.106.166.154:15012             HEALTHY     OK                xds-grpc
10.211.55.14:6443                HEALTHY     OK                outbound|443||kubernetes.default.svc.cluster.local
10.244.243.193:53                HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.244.243.193:9153              HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
10.244.243.195:53                HEALTHY     OK                outbound|53||kube-dns.kube-system.svc.cluster.local
10.244.243.195:9153              HEALTHY     OK                outbound|9153||kube-dns.kube-system.svc.cluster.local
10.244.243.197:15010             HEALTHY     OK                outbound|15010||istiod.istio-system.svc.cluster.local
10.244.243.197:15012             HEALTHY     OK                outbound|15012||istiod.istio-system.svc.cluster.local
10.244.243.197:15014             HEALTHY     OK                outbound|15014||istiod.istio-system.svc.cluster.local
10.244.243.197:15017             HEALTHY     OK                outbound|443||istiod.istio-system.svc.cluster.local
10.244.243.197:15053             HEALTHY     OK                outbound|853||istiod.istio-system.svc.cluster.local
10.244.243.198:8080              HEALTHY     OK                outbound|80||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.198:8443              HEALTHY     OK                outbound|443||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.198:15443             HEALTHY     OK                outbound|15443||istio-egressgateway.istio-system.svc.cluster.local
10.244.243.199:8080              HEALTHY     OK                outbound|80||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:8443              HEALTHY     OK                outbound|443||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:15021             HEALTHY     OK                outbound|15021||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:15443             HEALTHY     OK                outbound|15443||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.199:31400             HEALTHY     OK                outbound|31400||istio-ingressgateway.istio-system.svc.cluster.local
10.244.243.201:3000              HEALTHY     OK                outbound|3000||grafana.istio-system.svc.cluster.local
10.244.243.202:9411              HEALTHY     OK                outbound|9411||zipkin.istio-system.svc.cluster.local
10.244.243.202:16686             HEALTHY     OK                outbound|80||tracing.istio-system.svc.cluster.local
10.244.243.203:9090              HEALTHY     OK                outbound|9090||kiali.istio-system.svc.cluster.local
10.244.243.203:20001             HEALTHY     OK                outbound|20001||kiali.istio-system.svc.cluster.local
10.244.243.204:9090              HEALTHY     OK                outbound|9090||prometheus.istio-system.svc.cluster.local
10.244.243.206:1883              HEALTHY     OK                outbound|1883||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:4369              HEALTHY     OK                outbound|4369||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:8888              HEALTHY     OK                outbound|8888||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:9001              HEALTHY     OK                outbound|9001||vernemq.exposed-with-istio.svc.cluster.local
10.244.243.206:44053             HEALTHY     OK                outbound|44053||vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:1883                   HEALTHY     OK                inbound|1883|tcp-mqtt|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:4369                   HEALTHY     OK                inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:8888                   HEALTHY     OK                inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:9001                   HEALTHY     OK                inbound|9001|tcp-mqtt-ws|vernemq.exposed-with-istio.svc.cluster.local
127.0.0.1:15000                  HEALTHY     OK                prometheus_stats
127.0.0.1:15020                  HEALTHY     OK                agent
127.0.0.1:44053                  HEALTHY     OK                inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local
unix://./etc/istio/proxy/SDS     HEALTHY     OK                sds-grpc

istioctl proxy-config routes vernemq-c945876f-tvvz7.exposed-with-istio

NOTE: This output only contains routes loaded via RDS.
NAME                                                                         DOMAINS                               MATCH                  VIRTUAL SERVICE
istio-ingressgateway.istio-system.svc.cluster.local:15021                    istio-ingressgateway.istio-system     /*                     
istiod.istio-system.svc.cluster.local:853                                    istiod.istio-system                   /*                     
20001                                                                        kiali.istio-system                    /*                     
15010                                                                        istiod.istio-system                   /*                     
15014                                                                        istiod.istio-system                   /*                     
vernemq.exposed-with-istio.svc.cluster.local:4369                            vernemq                               /*                     
vernemq.exposed-with-istio.svc.cluster.local:44053                           vernemq                               /*                     
kube-dns.kube-system.svc.cluster.local:9153                                  kube-dns.kube-system                  /*                     
8888                                                                         vernemq                               /*                     
80                                                                           istio-egressgateway.istio-system      /*                     
80                                                                           istio-ingressgateway.istio-system     /*                     
80                                                                           tracing.istio-system                  /*                     
grafana.istio-system.svc.cluster.local:3000                                  grafana.istio-system                  /*                     
9411                                                                         zipkin.istio-system                   /*                     
9090                                                                         kiali.istio-system                    /*                     
9090                                                                         prometheus.istio-system               /*                     
inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local     *                                     /*                     
inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local               *                                     /*                     
inbound|8888|http-dashboard|vernemq.exposed-with-istio.svc.cluster.local     *                                     /*                     
inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local               *                                     /*                     
inbound|44053|vmq|vernemq.exposed-with-istio.svc.cluster.local               *                                     /*                     
                                                                             *                                     /stats/prometheus*     
InboundPassthroughClusterIpv4                                                *                                     /*                     
inbound|4369|empd|vernemq.exposed-with-istio.svc.cluster.local               *                                     /*                     
InboundPassthroughClusterIpv4                                                *                                     /*                     
                                                                             *                                     /healthz/ready*    

4 Answers

2
votes

First I would recommend enabling envoy logging for the pod:

kubectl exec -it <pod-name> -c istio-proxy -- curl -X POST http://localhost:15000/logging?level=trace

Now follow the istio sidecar logs with

kubectl logs <pod-name> -c istio-proxy -f
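Trace-level output is extremely verbose, so once you are done debugging it may be worth resetting the sidecar's log level (a sketch, assuming Envoy's admin interface on its default port 15000):

```shell
# Reset the Envoy log level in the sidecar back to the default (warning)
kubectl exec -it <pod-name> -c istio-proxy -- \
  curl -X POST "http://localhost:15000/logging?level=warning"
```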

Update

Since your envoy proxies log a problem on both sides, the connection reaches envoy but cannot be established to the broker itself.

Regarding port 15006: in istio, all traffic is routed through the envoy proxy (the istio sidecar). For that, istio maps each port to 15006 for inbound traffic (everything coming into the sidecar from somewhere) and to 15001 for outbound (from the sidecar to somewhere). More on that here: https://istio.io/latest/docs/ops/diagnostic-tools/proxy-cmd/#deep-dive-into-envoy-configuration

The config of istioctl proxy-config listeners <pod-name> looks good so far. Let's try to find the error.

Istio is sometimes very strict with its configuration requirements. To rule that out, could you please first adjust your service to type: ClusterIP and add a targetPort for the mqtt port:

- port: 1883
  name: tcp-mqtt
  targetPort: 1883

Furthermore please run istioctl proxy-config endpoints <pod-name> and istioctl proxy-config routes <pod-name>.

2
votes

I had the same problem when using VerneMQ behind an istio gateway. The problem was that the VerneMQ process resets the TCP connection if listener.tcp.default contains the default value of 127.0.0.1:1883. I fixed it by setting DOCKER_VERNEMQ_LISTENER__TCP__DEFAULT to "0.0.0.0:1883".
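Applied to the Deployment from the question, that is one extra entry in the container's env list (a sketch; only the new variable is shown):

```yaml
# Bind VerneMQ's default TCP listener on all interfaces so it is
# reachable both via 127.0.0.1 (how the sidecar connects) and the pod IP
- name: DOCKER_VERNEMQ_LISTENER__TCP__DEFAULT
  value: "0.0.0.0:1883"
```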

1
votes

Based on your configuration, I would say you have to use a ServiceEntry to enable communication between pods in the mesh and pods outside the mesh.

> ServiceEntry enables adding additional entries into Istio’s internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints). These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform’s service registry (e.g., a set of VMs talking to services in Kubernetes). In addition, the endpoints of a service entry can also be dynamically selected by using the workloadSelector field. These endpoints can be VM workloads declared using the WorkloadEntry object or Kubernetes pods. The ability to select both pods and VMs under a single service allows for migration of services from VMs to Kubernetes without having to change the existing DNS names associated with the services.
>
> You use a service entry to add an entry to the service registry that Istio maintains internally. After you add the service entry, the Envoy proxies can send traffic to the service as if it was a service in your mesh. Configuring service entries allows you to manage traffic for services running outside of the mesh.
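As an illustration only (the hostname here is hypothetical, not taken from the question), a ServiceEntry for an MQTT broker running outside the mesh could look like this:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-mqtt-broker
spec:
  hosts:
  - mqtt.example.com   # hypothetical DNS name of the external broker
  location: MESH_EXTERNAL
  ports:
  - number: 1883
    name: tcp-mqtt
    protocol: TCP
  resolution: DNS
```

After applying this, sidecars in the mesh can route TCP traffic to mqtt.example.com:1883 as if it were a service inside the mesh.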

For more information and more examples, visit the istio documentation about service entries here and here.


Let me know if you have any more questions.

0
votes

Here's my Gateway, VirtualService and Service config that works without issues:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: mqtt-domain-tld-gw
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 31400
      name: tcp
      protocol: TCP
    hosts:
    - mqtt.domain.tld
  - port:
      number: 15443
      name: tls
      protocol: TLS
    hosts:
    - mqtt.domain.tld
    tls:
      mode: PASSTHROUGH
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mqtt-domain-tld-vs
spec:
  hosts:
  - mqtt.domain.tld
  gateways:
  - mqtt-domain-tld-gw
  tcp:
  - match:
    - port: 31400
    route:
    - destination:
        host: mqtt
        port:
          number: 1883
  tls:
  - match:
    - port: 15443
      sniHosts:
      - mqtt.domain.tld
    route:
    - destination:
        host: mqtt
        port:
          number: 8883
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: mqtt
  name: mqtt
spec:
  ports:
  - name: tcp-mqtt
    port: 1883
    protocol: TCP
    targetPort: 1883
    appProtocol: tcp
  - name: tls-mqtt
    port: 8883
    protocol: TCP
    targetPort: 8883
    appProtocol: tls
  selector:
    app: mqtt
  type: LoadBalancer
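Assuming DNS for mqtt.domain.tld points at the ingress gateway's external IP, the broker can then be reached through the gateway's TCP and TLS ports (a sketch using the mosquitto client from the question; the topic and CA file are placeholders):

```shell
# Plain MQTT through the gateway's dedicated TCP port
mosquitto_sub -h mqtt.domain.tld -p 31400 -t hello

# MQTT over TLS through the TLS passthrough port
# (the SNI must match the Gateway's host; the broker terminates TLS on 8883)
mosquitto_sub -h mqtt.domain.tld -p 15443 -t hello --cafile ca.crt
```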