Question
The aim is that all external requests are routed through the same egress gateway. There are two different ports I need to reach on the same external service: 8443 and 8888. Reaching port 8443 isn't a problem, but I can't get port 8888 up and running.
Additional Information
I've changed the IstioOperator so that the ingressGateway and the egressGateway can handle additional ports beyond the default ones.
Istio Version: 1.7.0
While everything is fine with the ingressGateway definition (the ports are exposed and reachable), that is not the case for the egressGateway: there I can still only reach the default ports. The service shows the additional port, but netstat reveals no Envoy listener for it.
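To see which listeners were actually pushed to each gateway, istioctl can dump Envoy's listener configuration directly; this is a diagnostic sketch using the egress pod name from this cluster (substitute your own):

```shell
# List every listener Envoy has configured on the egress gateway pod;
# if the port were wired up, one would appear on 0.0.0.0:8888.
istioctl proxy-config listeners istio-egressgateway-5d6b6df7fd-gzkl6 -n istio-system

# Narrow the output down to the missing port:
istioctl proxy-config listeners istio-egressgateway-5d6b6df7fd-gzkl6 -n istio-system --port 8888
```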
Ingress
istio-ingressgateway   LoadBalancer   X.X.X.X   X.X.X.X   15021:32047/TCP,80:31704/TCP,443:30924/TCP,15443:30250/TCP,6514:30382/TCP   91d
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:15020 0.0.0.0:* LISTEN 1/pilot-agent
tcp 0 0 0.0.0.0:15021 0.0.0.0:* LISTEN 17/envoy
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 17/envoy
tcp 0 0 0.0.0.0:6514 0.0.0.0:* LISTEN 17/envoy
tcp 0 0 0.0.0.0:15090 0.0.0.0:* LISTEN 17/envoy
tcp 0 0 127.0.0.1:15000 0.0.0.0:* LISTEN 17/envoy
tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 17/envoy
Egress
istio-egressgateway   ClusterIP   X.X.X.X   <none>   80/TCP,443/TCP,15443/TCP,8888/TCP
Here I'm missing an Envoy listener on port 8888:
istio-proxy@istio-egressgateway-5d6b6df7fd-gzkl6:/$ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:15090 0.0.0.0:* LISTEN 17/envoy
tcp 0 0 127.0.0.1:15000 0.0.0.0:* LISTEN 17/envoy
tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 17/envoy
tcp 0 0 0.0.0.0:15020 0.0.0.0:* LISTEN 1/pilot-agent
tcp 0 0 0.0.0.0:15021 0.0.0.0:* LISTEN 17/envoy
That, I think, is why I'm getting a "Failed to connect to upstream" error (I've already checked for mutual TLS configuration conflicts). My config for port 443 is working:
istio-proxy [2020-12-09T12:09:46.337Z] "- - -" 0 UF,URX "-" "-" 0 0 3 - "-" "-" "-" "-" "x.x.x.x:8888" outbound|8888||istio-egressgateway.istio-system.svc.cluster.local - x.x.x.x:8888 x.x.x.x:43362 - -
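The listener state can also be queried from inside the pod through the pilot-agent admin passthrough, which helps distinguish "Istiod never pushed a listener for 8888" from "the listener exists but isn't binding". A sketch, reusing the pod name above:

```shell
# Ask Envoy's admin interface (via pilot-agent) for its current listeners:
kubectl exec -n istio-system istio-egressgateway-5d6b6df7fd-gzkl6 -c istio-proxy -- \
  pilot-agent request GET listeners

# The full config dump can be grepped for the port:
kubectl exec -n istio-system istio-egressgateway-5d6b6df7fd-gzkl6 -c istio-proxy -- \
  pilot-agent request GET config_dump | grep 8888
```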
Istio Operator
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio-control-plane
spec:
  tag: 1.7.0
  meshConfig:
    # Enable SDS for proxies
    defaultConfig:
      sds:
        enabled: true
    # Enable access logging for proxies
    accessLogFile: /dev/stdout
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          # Set designated IP for ingress gateways
          loadBalancerIP: "{{ .Values.istioIngressgatewayIp }}"
          ports:
          # WARNING: Include default ports because Helm replaces the `ports` value
          - port: 15021
            targetPort: 15021
            name: status-port
          - port: 80
            targetPort: 8080
            name: http2
          - port: 443
            targetPort: 8443
            name: https
          - port: 15443
            targetPort: 15443
            name: tls
          # Enable syslog port
          - port: 6514
            targetPort: 6514
            name: tcp-syslog
        hpaSpec:
          minReplicas: 2
    egressGateways:
    - name: istio-egressgateway
      enabled: true
      k8s:
        service:
          ports:
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443
          - name: tls
            port: 15443
            targetPort: 15443
          - name: https-alt
            port: 8888
            targetPort: 8888
        hpaSpec:
          minReplicas: 2
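One thing worth ruling out: ports added to the operator spec only reach the gateway pod after the spec is re-applied and the Deployment is rolled. A sketch (the filename is an assumption):

```shell
# Re-apply the IstioOperator spec (`istioctl install` replaced
# `istioctl manifest apply` as of Istio 1.6):
istioctl install -f istio-control-plane.yaml

# Restart the egress gateway pods so they pick up the new container port:
kubectl -n istio-system rollout restart deployment istio-egressgateway
kubectl -n istio-system rollout status deployment istio-egressgateway
```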
Am I missing something? Do I need to restart something that I forgot after adding the additional port for the egress gateway? Or is this simply not possible?
Gateway
The Gateway definition in use:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: t-egressgateway-$(ENVIRONMENT)
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 443
      name: tls-t-egressgateway-$(ENVIRONMENT)
      protocol: TLS
    hosts:
    - t.$(ENVIRONMENT).$(BASE_DOMAIN_INTERNAL)
    tls:
      mode: PASSTHROUGH
  - port:
      number: 8888
      name: tls-t-egressgateway-8888-$(ENVIRONMENT)
      protocol: TLS
    hosts:
    - t.$(ENVIRONMENT).$(BASE_DOMAIN_INTERNAL)
    tls:
      mode: PASSTHROUGH
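A Gateway server port is only selectable if the gateway's Service actually exposes it, so a quick cross-check of the Service's port list against the Gateway definition can help:

```shell
# Print name/port/targetPort triples of the egress gateway Service:
kubectl -n istio-system get svc istio-egressgateway \
  -o jsonpath='{range .spec.ports[*]}{.name}{"\t"}{.port}{" -> "}{.targetPort}{"\n"}{end}'
```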
Kubectl Describe
Name:           istio-egressgateway-5d6b6df7fd-gzkl6
Namespace:      istio-system
Priority:       0
Node:           9ad0dc92-6865-44a5-9c73-8a99ac9c9f82/xx.x.xx.xx
Start Time:     Wed, 09 Dec 2020 11:19:58 +0100
Labels:         app=istio-egressgateway
                chart=gateways
                heritage=Tiller
                istio=egressgateway
                pod-template-hash=5d6b6df7fd
                release=istio
                service.istio.io/canonical-name=istio-egressgateway
                service.istio.io/canonical-revision=latest
Annotations:    k8s.v1.cni.cncf.io/network-status:
                  [
                    {
                      "interface": "eth0",
                      "mac": "xx:xx:xx:xx:xx:xx",
                      "ip": "xx.xx.x.xx/24",
                      "name": "cluster-wide-default",
                      "gateway_ip": "xx.xx.x.x",
                      "attachment_id": "8a86dc06-a7d2-4808-98f9-67dce50caec4",
                      "default": true,
                      "vlan_id": 24
                    }
                  ]
                kubectl.kubernetes.io/restartedAt: 2020-11-19T07:59:59+02:00
                kubernetes.io/psp: pks-privileged
                prometheus.io/path: /stats/prometheus
                prometheus.io/port: 15090
                prometheus.io/scrape: true
                sidecar.istio.io/inject: false
Status:         Running
IP:             xx.xx.x.xx
IPs:
  IP:           xx.xx.x.xx
Controlled By:  ReplicaSet/istio-egressgateway-5d6b6df7fd
Containers:
  istio-proxy:
    Container ID:  docker://2ad05a18267b829f381c18c7076a08fced64cd9b0990d3290115d6579fb16e12
    Image:         x/istio/proxyv2:1.7.0
    Image ID:      docker-pullable://x/istio/proxyv2@sha256:c1f1b45a4162509f86aa82d0148aef55824454e7204f27f23dddc9d7f4ae7cd1
    Ports:         8080/TCP, 8443/TCP, 15443/TCP, 8888/TCP, 15090/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      proxy
      router
      --domain
      $(POD_NAMESPACE).svc.cluster.local
      --proxyLogLevel=warning
      --proxyComponentLogLevel=misc:error
      --log_output_level=default:info
      --serviceCluster
      istio-egressgateway
      --trust-domain=cluster.local
    State:          Running
      Started:      Wed, 09 Dec 2020 11:20:02 +0100
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     2
      memory:  1Gi
    Requests:
      cpu:     100m
      memory:  128Mi
    Readiness:  http-get http://:15021/healthz/ready delay=1s timeout=1s period=2s #success=1 #failure=30
    Environment:
      JWT_POLICY:                first-party-jwt
      PILOT_CERT_PROVIDER:       istiod
      CA_ADDR:                   istiod.istio-system.svc:15012
      NODE_NAME:                 (v1:spec.nodeName)
      POD_NAME:                  istio-egressgateway-5d6b6df7fd-gzkl6 (v1:metadata.name)
      POD_NAMESPACE:             istio-system (v1:metadata.namespace)
      INSTANCE_IP:               (v1:status.podIP)
      HOST_IP:                   (v1:status.hostIP)
      SERVICE_ACCOUNT:           (v1:spec.serviceAccountName)
      CANONICAL_SERVICE:         (v1:metadata.labels['service.istio.io/canonical-name'])
      CANONICAL_REVISION:        (v1:metadata.labels['service.istio.io/canonical-revision'])
      ISTIO_META_WORKLOAD_NAME:  istio-egressgateway
      ISTIO_META_OWNER:          kubernetes://apis/apps/v1/namespaces/istio-system/deployments/istio-egressgateway
      ISTIO_META_MESH_ID:        cluster.local
      ISTIO_META_ROUTER_MODE:    sni-dnat
      ISTIO_META_CLUSTER_ID:     Kubernetes
    Mounts:
      /etc/istio/config from config-volume (rw)
      /etc/istio/egressgateway-ca-certs from egressgateway-ca-certs (ro)
      /etc/istio/egressgateway-certs from egressgateway-certs (ro)
      /etc/istio/pod from podinfo (rw)
      /etc/istio/proxy from istio-envoy (rw)
      /var/run/ingress_gateway from gatewaysdsudspath (rw)
      /var/run/secrets/istio from istiod-ca-cert (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from istio-egressgateway-service-account-token-hmkc9 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  istiod-ca-cert:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio-ca-root-cert
    Optional:  false
  podinfo:
    Type:  DownwardAPI (a volume populated by information about the pod)
    Items:
      metadata.labels -> labels
      metadata.annotations -> annotations
  istio-envoy:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  gatewaysdsudspath:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      istio
    Optional:  true
  egressgateway-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio-egressgateway-certs
    Optional:    true
  egressgateway-ca-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio-egressgateway-ca-certs
    Optional:    true
  istio-egressgateway-service-account-token-hmkc9:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio-egressgateway-service-account-token-hmkc9
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>
Service Entry
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: t-se-$(ENVIRONMENT)
spec:
  hosts:
  - t.$(ENVIRONMENT).$(BASE_DOMAIN_INTERNAL)
  ports:
  - number: 443
    name: https-t-se-$(ENVIRONMENT)
    protocol: HTTPS
  - number: 8888
    name: https-t-se2-$(ENVIRONMENT)
    protocol: HTTPS
  resolution: DNS
  location: MESH_INTERNAL
VM Service Entry
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: t-vm-se-$(ENVIRONMENT)
spec:
  hosts:
  - t.$(ENVIRONMENT).$(BASE_DOMAIN_INTERNAL)
  ports:
  - number: 8443
    name: https-t-vm-se-$(ENVIRONMENT)
    protocol: HTTPS
  - number: 8888
    name: https-t-vm-se2-$(ENVIRONMENT)
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL
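To verify that these ServiceEntries produced Envoy clusters for both ports on the egress gateway, istioctl can filter clusters by FQDN; the host below is a placeholder for the rendered $(ENVIRONMENT)/$(BASE_DOMAIN_INTERNAL) value:

```shell
# Show the outbound clusters the egress gateway knows for the external host:
istioctl proxy-config clusters istio-egressgateway-5d6b6df7fd-gzkl6 -n istio-system \
  --fqdn 't.<environment>.<base-domain-internal>'
```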
Virtual Service
Connections over port 443 work as expected:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: t-vs-$(ENVIRONMENT)
spec:
  hosts:
  - t.$(ENVIRONMENT).$(BASE_DOMAIN_INTERNAL)
  gateways:
  - t-egressgateway-$(ENVIRONMENT)
  - mesh
  exportTo:
  - .
  tls:
  - match:
    - gateways:
      - mesh
      port: 443
      sniHosts:
      - t.$(ENVIRONMENT).$(BASE_DOMAIN_INTERNAL)
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 443
      weight: 100
  - match:
    - gateways:
      - t-egressgateway-$(ENVIRONMENT)
      port: 443
      sniHosts:
      - t.$(ENVIRONMENT).$(BASE_DOMAIN_INTERNAL)
    route:
    - destination:
        host: t.$(ENVIRONMENT).$(BASE_DOMAIN_INTERNAL)
        port:
          number: 8443
      weight: 100
  - match:
    - gateways:
      - mesh
      port: 8888
      sniHosts:
      - t.$(ENVIRONMENT).$(BASE_DOMAIN_INTERNAL)
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 8888
      weight: 100
  - match:
    - gateways:
      - t-egressgateway-$(ENVIRONMENT)
      port: 8888
      sniHosts:
      - t.$(ENVIRONMENT).$(BASE_DOMAIN_INTERNAL)
    route:
    - destination:
        host: t.$(ENVIRONMENT).$(BASE_DOMAIN_INTERNAL)
        port:
          number: 8888
      weight: 100
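Since several resources have to line up here (Gateway, VirtualService, ServiceEntries), istioctl analyze may flag referential mistakes between them; it is available in Istio 1.7:

```shell
# Analyze mesh configuration in the Istio namespace and in the namespace
# of the workload making the call (substitute your app namespace):
istioctl analyze -n istio-system
istioctl analyze -n <app-namespace>
```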
Getting a Different Error Message
When I change the targetPort in the IstioOperator from 8888 to 8443, the log output from the istio-sidecar shifts from:
istio-proxy [2020-12-10T19:40:54.724Z] "- - -" 0 UF,URX "-" "-" 0 0 2 - "-" "-" "-" "-" "xx.xx.x.x:8888" outbound|8888||istio-egressgateway.istio-system.svc.cluster.local - xx.xxx.xx.xxx:8888 xx.xx.x.x:34296 - -
to
istio-proxy [2020-12-10T19:43:08.341Z] "- - -" 0 - "-" "-" 517 0 1 - "-" "-" "-" "-" "xx.xx.x.x:8443" outbound|8888||istio-egressgateway.istio-system.svc.cluster.local xx.xx.x.x:46252 xx.xxx.xx.xxx:8888 xx.xx.x.x:36660 t.$(ENVIRONMENT).$(BASE_DOMAIN_INTERNAL) -
But I can't see an entry in the egress-gateway log.
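With accessLogFile: /dev/stdout set in the operator spec, the egress gateway's access log goes to the container log, so it can be tailed while reproducing the request to confirm whether traffic arrives at the gateway at all:

```shell
# Stream the egress gateway's access log while re-running the failing call:
kubectl -n istio-system logs -f deploy/istio-egressgateway
```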
The problem now is probably that the istio-egressgateway pod's ports look like this:
containers:
  ports:
  - containerPort: 8080
    protocol: TCP
  - containerPort: 8443
    protocol: TCP
  - containerPort: 15443
    protocol: TCP
  - containerPort: 8443
    protocol: TCP
  - containerPort: 15090
    name: http-envoy-prom
    protocol: TCP
where they looked like this before:
containers:
  ports:
  - containerPort: 8080
    protocol: TCP
  - containerPort: 8443
    protocol: TCP
  - containerPort: 15443
    protocol: TCP
  - containerPort: 8888
    protocol: TCP
  - containerPort: 15090
    name: http-envoy-prom
    protocol: TCP