Your NetworkPolicy objects are correct; I created an example with them and will demonstrate below.
If you can still reach the service in the other namespace using its FQDN, NetworkPolicy may not be fully enabled on your cluster.
Run gcloud container clusters describe "CLUSTER_NAME" --zone "ZONE"
and look for these two snippets:
- At the beginning of the description it shows whether the NetworkPolicy plugin is enabled at the master level; it should look like this:
addonsConfig:
  networkPolicyConfig: {}
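If you'd rather not scan the whole description, a quick filter for just these two fields should also work (a sketch using gcloud's --format projections; CLUSTER_NAME and ZONE are placeholders):
$ gcloud container clusters describe "CLUSTER_NAME" --zone "ZONE" \
    --format="yaml(addonsConfig.networkPolicyConfig,networkPolicy)"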
- In the middle of the description you can check whether NetworkPolicy is enabled on the nodes. It should look like this:
name: cluster-1
network: default
networkConfig:
  network: projects/myproject/global/networks/default
  subnetwork: projects/myproject/regions/us-central1/subnetworks/default
networkPolicy:
  enabled: true
  provider: CALICO
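If either of these snippets is missing, NetworkPolicy can be turned on with the two documented gcloud commands below; be aware that enabling enforcement recreates the node pools, so schedule it accordingly:
# First enable the add-on at the master level
$ gcloud container clusters update "CLUSTER_NAME" --zone "ZONE" --update-addons=NetworkPolicyConfig=ENABLED
# Then enable enforcement on the nodes
$ gcloud container clusters update "CLUSTER_NAME" --zone "ZONE" --enable-network-policy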
Reproduction:
- I'll create a simple example, using the gcr.io/google-samples/hello-app:1.0 image for tenant1 and gcr.io/google-samples/hello-app:2.0 for tenant2, so it's simpler to see where each request lands, while keeping the names from your environment. (Note: on recent kubectl versions the --generator flag used below has been removed; a plain kubectl run creates a Pod by default.)
$ kubectl create namespace tenant1
namespace/tenant1 created
$ kubectl create namespace tenant2
namespace/tenant2 created
$ kubectl run -n tenant1 grafana-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:1.0
pod/grafana-app created
$ kubectl run -n tenant1 postgres-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:1.0
pod/postgres-app created
$ kubectl run -n tenant2 grafana-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:2.0
pod/grafana-app created
$ kubectl run -n tenant2 postgres-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:2.0
pod/postgres-app created
$ kubectl expose pod -n tenant1 grafana-app --port=8080 --type=LoadBalancer
service/grafana-app exposed
$ kubectl expose pod -n tenant1 postgres-app --port=8080 --type=NodePort
service/postgres-app exposed
$ kubectl expose pod -n tenant2 grafana-app --port=8080 --type=LoadBalancer
service/grafana-app exposed
$ kubectl expose pod -n tenant2 postgres-app --port=8080 --type=NodePort
service/postgres-app exposed
$ kubectl get all -o wide -n tenant1
NAME               READY   STATUS    RESTARTS   AGE    IP          NODE
pod/grafana-app    1/1     Running   0          100m   10.48.2.4   gke-cluster-114-default-pool-e5df7e35-ez7s
pod/postgres-app   1/1     Running   0          100m   10.48.0.6   gke-cluster-114-default-pool-e5df7e35-c68o

NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)          AGE   SELECTOR
service/grafana-app    LoadBalancer   10.1.23.39   34.72.118.149   8080:31604/TCP   77m   run=grafana-app
service/postgres-app   NodePort       10.1.20.92   <none>          8080:31033/TCP   77m   run=postgres-app

$ kubectl get all -o wide -n tenant2
NAME               READY   STATUS    RESTARTS   AGE    IP          NODE
pod/grafana-app    1/1     Running   0          76m    10.48.4.8   gke-cluster-114-default-pool-e5df7e35-ol8n
pod/postgres-app   1/1     Running   0          100m   10.48.4.5   gke-cluster-114-default-pool-e5df7e35-ol8n

NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)          AGE   SELECTOR
service/grafana-app    LoadBalancer   10.1.17.50    104.154.135.69   8080:30534/TCP   76m   run=grafana-app
service/postgres-app   NodePort       10.1.29.215   <none>           8080:31667/TCP   77m   run=postgres-app
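As a side note, kubectl run labels each pod with run=<pod name> (visible in the SELECTOR column above), and that is the label the second policy below matches on. A quick way to double-check the labels, using kubectl's standard --show-labels flag:
$ kubectl get pods -n tenant1 --show-labels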
- Now let's deploy your two rules: the first blocks all traffic coming from outside the namespace, the second allows ingress to the grafana-app from outside the namespace:
$ cat default-deny-other-ns.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}
$ cat allow-grafana-ingress.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-external
spec:
  podSelector:
    matchLabels:
      run: grafana-app
  ingress:
  - from: []
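One note on default-deny-other-ns.yaml: a matchLabels with no entries selects every pod in the namespace, so, as far as I'm aware, an equivalent and more explicit way to write the same policy is:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector: {}        # empty selector: applies to every pod in this namespace
  ingress:
  - from:
    - podSelector: {}    # allow traffic only from pods in this same namespace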
By default, pods are non-isolated; they accept traffic from any source.
Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. (Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.)
Network policies do not conflict; they are additive. If any policy or policies select a pod, the pod is restricted to what is allowed by the union of those policies' ingress/egress rules. Thus, order of evaluation does not affect the policy result.
- Then we apply the rules to both namespaces, because a NetworkPolicy is scoped to the namespace it's created in. (For example, grafana-app in tenant1 will be selected by both policies, so its allowed ingress is the union of the two: pods from the same namespace plus any external source.)
$ kubectl apply -n tenant1 -f default-deny-other-ns.yaml
networkpolicy.networking.k8s.io/deny-from-other-namespaces created
$ kubectl apply -n tenant2 -f default-deny-other-ns.yaml
networkpolicy.networking.k8s.io/deny-from-other-namespaces created
$ kubectl apply -n tenant1 -f allow-grafana-ingress.yaml
networkpolicy.networking.k8s.io/web-allow-external created
$ kubectl apply -n tenant2 -f allow-grafana-ingress.yaml
networkpolicy.networking.k8s.io/web-allow-external created
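Before testing, it's worth confirming that all four policies landed where expected; listing them across namespaces also shows the pod selector each one uses:
$ kubectl get networkpolicy --all-namespaces
If anything looks off, kubectl describe networkpolicy <name> -n <namespace> prints the full rule set.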
- Now, for the final test, I'll open a shell inside the grafana-app pod in tenant1 and try to reach the postgres-app in both namespaces, checking the output:
$ kubectl exec -n tenant1 -it grafana-app -- /bin/sh
/ ### POSTGRES SAME NAMESPACE ###
/ # wget -O- postgres-app:8080
Connecting to postgres-app:8080 (10.1.20.92:8080)
Hello, world!
Version: 1.0.0
Hostname: postgres-app
/ ### GRAFANA OTHER NAMESPACE ###
/ # wget -O- --timeout=1 http://grafana-app.tenant2.svc.cluster.local:8080
Connecting to grafana-app.tenant2.svc.cluster.local:8080 (10.1.17.50:8080)
Hello, world!
Version: 2.0.0
Hostname: grafana-app
/ ### POSTGRES OTHER NAMESPACE ###
/ # wget -O- --timeout=1 http://postgres-app.tenant2.svc.cluster.local:8080
Connecting to postgres-app.tenant2.svc.cluster.local:8080 (10.1.29.215:8080)
wget: download timed out
- You can see that DNS resolves, but the NetworkPolicy blocks access to the backend pods.
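To rule DNS out completely, you can also target the pod IP directly from the same shell (10.48.4.5 is the postgres-app pod in tenant2 from the listing above); since the policy is enforced at the pod level rather than at the Service, I'd expect the same timeout:
/ # wget -O- --timeout=1 http://10.48.4.5:8080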
If, after double-checking that NetworkPolicy is enabled on both the master and the nodes, you still face the same issue, let me know in the comments and we can dig further.