I guess the HTTP 403 issue might be related to the Istio Authorization or Authentication mesh configuration, assuming that you've successfully injected the Envoy sidecar into the particular Pod or across the related namespaces.
Inspecting the logs is probably the most informative first step. Once you've confirmed that Envoy's access logs are enabled, you can look through the relevant istio-proxy sidecar and istio-ingressgateway Pod logs, where you can see the Envoy response flags and the traffic path:
$ kubectl logs -l app=httpbin -c istio-proxy
[2019-03-06T09:31:27.360Z] "GET /status/418 HTTP/1.1" 418 - "-" 0 135 5 2 "-" "curl/7.60.0" "d209e46f-9ed5-9b61-bbdd-43e22662702a" "httpbin:8000" "127.0.0.1:80" inbound|8000|http|httpbin.default.svc.cluster.local - 172.30.146.73:80 172.30.146.82:38618 outbound_.8000_._.httpbin.default.svc.cluster.local
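If no access logs appear at all, they may simply not be enabled in your installation. As a minimal sketch (assuming a default installation where the mesh configuration lives in the istio ConfigMap in the istio-system namespace), you can enable them by setting accessLogFile in the mesh section:
$ kubectl -n istio-system edit configmap istio
# inside the "mesh" section of the ConfigMap data, set:
#   accessLogFile: "/dev/stdout"
Depending on the Istio version, you may need to restart the workload Pods for the sidecars to pick this change up.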
Check the Authentication Policies within the mesh, since they can affect sidecar proxy behavior, and review the global mesh policy with respect to mTLS authentication (PERMISSIVE mode is enabled by default):
$ kubectl get policies.authentication.istio.io --all-namespaces
$ kubectl get meshpolicy.authentication.istio.io default -oyaml
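For reference, a permissive global mesh policy looks roughly like this (a sketch of the default MeshPolicy; the actual object in your cluster may differ):
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls:
      mode: PERMISSIVE
If the mode has been switched to STRICT, or namespace/service level Policy objects are present, compare them against the DestinationRules used on the client side.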
If you have enabled Authorization rules within the mesh, verify all the corresponding RBAC policies:
$ kubectl get clusterrbacconfigs.rbac.istio.io --all-namespaces
$ kubectl get authorizationpolicies.rbac.istio.io,rbacconfigs.rbac.istio.io,servicerolebindings.rbac.istio.io,serviceroles.rbac.istio.io --all-namespaces
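If RBAC is enabled for the namespace (for instance via ClusterRbacConfig) and a request doesn't match any ServiceRole/ServiceRoleBinding, Envoy rejects it with a 403 "RBAC: access denied". Below is a minimal sketch using the httpbin service from the log above, with hypothetical policy names, that would allow plain GET requests:
apiVersion: rbac.istio.io/v1alpha1
kind: ClusterRbacConfig
metadata:
  name: default
spec:
  mode: ON_WITH_INCLUSION
  inclusion:
    namespaces: ["default"]
---
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRole
metadata:
  name: httpbin-viewer            # hypothetical name
  namespace: default
spec:
  rules:
  - services: ["httpbin.default.svc.cluster.local"]
    methods: ["GET"]
---
apiVersion: rbac.istio.io/v1alpha1
kind: ServiceRoleBinding
metadata:
  name: bind-httpbin-viewer       # hypothetical name
  namespace: default
spec:
  subjects:
  - user: "*"
  roleRef:
    kind: ServiceRole
    name: "httpbin-viewer"
If your own policies look similar but requests are still denied, double-check that the service FQDN in the ServiceRole matches the target service and that the subject actually matches the caller (for example, its source principal when mTLS is enabled).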
You can find more information on the relevant troubleshooting steps in the official Istio documentation.