I have created 2 tenants (tenant1, tenant2) in 2 namespaces: tenant1-namespace and tenant2-namespace.

Each tenant has a DB pod and its services.

How do I isolate the DB pods/services, i.e. how do I restrict a pod/service in one tenant's namespace from accessing another tenant's DB pods?

I have used a service account for each tenant and applied network policies so that the namespaces are isolated.

kubectl get svc --all-namespaces

NAMESPACE           NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP PORT(S)          AGE
tenant1-namespace   grafana-app            LoadBalancer   10.64.7.233    104.x.x.x   3000:31271/TCP   92m
tenant1-namespace   postgres-app           NodePort       10.64.2.80     <none>      5432:31679/TCP   92m
tenant2-namespace   grafana-app            LoadBalancer   10.64.14.38    35.x.x.x    3000:32226/TCP   92m
tenant2-namespace   postgres-app           NodePort       10.64.2.143    <none>      5432:31912/TCP   92m

So I want to restrict each grafana-app to use only the Postgres DB in its own namespace, not the one in the other namespace.

But the problem is that, using the fully qualified DNS service name (app-name.namespace-name.svc.cluster.local), the pods can still access each other's DB pods: grafana-app in tenant1-namespace can reach the Postgres DB in tenant2-namespace via postgres-app.tenant2-namespace.svc.cluster.local.
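For example, a quick connectivity check from inside the tenant1 Grafana pod shows the cross-namespace DB port is reachable (illustrative; this assumes nc is available in the image, and <grafana-pod> stands in for the actual pod name):

kubectl exec -n tenant1-namespace <grafana-pod> -- nc -zv -w 2 postgres-app.tenant2-namespace.svc.cluster.local 5432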

Update: network policies

1)

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector:
    matchLabels:       # empty selector: applies to all pods in the namespace
  ingress:
  - from:
    - podSelector: {}  # allow traffic only from pods in the same namespace

2)

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-external
spec:
  podSelector:
    matchLabels:
      app: grafana-app
  ingress:
  - from: []           # empty "from": allow ingress from any source

1 Answer

  • Your NetworkPolicy objects are correct; I created an example with them and will demonstrate below.

  • If you can still reach the service in the other namespace using the FQDN, NetworkPolicy enforcement may not be fully enabled on your cluster.

Run gcloud container clusters describe "CLUSTER_NAME" --zone "ZONE" and look for these two snippets:

  • At the beginning of the description it shows whether the NetworkPolicy plugin is enabled at the master level; it should look like this:
addonsConfig:
  networkPolicyConfig: {}
  • In the middle of the description, you can check whether NetworkPolicy is enabled on the nodes. It should look like this:
name: cluster-1
network: default
networkConfig:
  network: projects/myproject/global/networks/default
  subnetwork: projects/myproject/regions/us-central1/subnetworks/default
networkPolicy:
  enabled: true
  provider: CALICO
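  • If either of those snippets is missing, you can enable the plugin with standard gcloud flags (a sketch; note that enabling enforcement recreates the node pools, so expect a rolling rebuild):
$ gcloud container clusters update CLUSTER_NAME --zone ZONE --update-addons=NetworkPolicy=ENABLED
$ gcloud container clusters update CLUSTER_NAME --zone ZONE --enable-network-policy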

Reproduction:

  • I'll create a simple example. I'll use the gcr.io/google-samples/hello-app:1.0 image for tenant1 and gcr.io/google-samples/hello-app:2.0 for tenant2, so it's simpler to see where each request lands, but I'll keep the names from your environment:
$ kubectl create namespace tenant1
namespace/tenant1 created
$ kubectl create namespace tenant2
namespace/tenant2 created

$ kubectl run -n tenant1 grafana-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:1.0 
pod/grafana-app created
$ kubectl run -n tenant1 postgres-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:1.0 
pod/postgres-app created

$ kubectl run -n tenant2 grafana-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:2.0 
pod/grafana-app created
$ kubectl run -n tenant2 postgres-app --generator=run-pod/v1 --image=gcr.io/google-samples/hello-app:2.0 
pod/postgres-app created

$ kubectl expose pod -n tenant1 grafana-app --port=8080 --type=LoadBalancer
service/grafana-app exposed
$ kubectl expose pod -n tenant1 postgres-app --port=8080 --type=NodePort
service/postgres-app exposed

$ kubectl expose pod -n tenant2 grafana-app --port=8080 --type=LoadBalancer
service/grafana-app exposed
$ kubectl expose pod -n tenant2 postgres-app --port=8080 --type=NodePort
service/postgres-app exposed

$ kubectl get all -o wide -n tenant1
NAME               READY   STATUS    RESTARTS   AGE    IP          NODE                                         
pod/grafana-app    1/1     Running   0          100m   10.48.2.4   gke-cluster-114-default-pool-e5df7e35-ez7s
pod/postgres-app   1/1     Running   0          100m   10.48.0.6   gke-cluster-114-default-pool-e5df7e35-c68o

NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)          AGE   SELECTOR
service/grafana-app    LoadBalancer   10.1.23.39   34.72.118.149   8080:31604/TCP   77m   run=grafana-app
service/postgres-app   NodePort       10.1.20.92   <none>          8080:31033/TCP   77m   run=postgres-app

$ kubectl get all -o wide -n tenant2
NAME               READY   STATUS    RESTARTS   AGE    IP          NODE                                         
pod/grafana-app    1/1     Running   0          76m    10.48.4.8   gke-cluster-114-default-pool-e5df7e35-ol8n
pod/postgres-app   1/1     Running   0          100m   10.48.4.5   gke-cluster-114-default-pool-e5df7e35-ol8n

NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)          AGE   SELECTOR
service/grafana-app    LoadBalancer   10.1.17.50    104.154.135.69   8080:30534/TCP   76m   run=grafana-app
service/postgres-app   NodePort       10.1.29.215   <none>           8080:31667/TCP   77m   run=postgres-app
  • Now, let's deploy your two rules: the first blocking all traffic from outside the namespace, the second allowing ingress to the grafana-app from outside the namespace:
$ cat default-deny-other-ns.yaml 
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-from-other-namespaces
spec:
  podSelector:
    matchLabels:
  ingress:
  - from:
    - podSelector: {}

$ cat allow-grafana-ingress.yaml 
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-external
spec:
  podSelector:
    matchLabels:
      run: grafana-app
  ingress:
  - from: []

By default, pods are non-isolated; they accept traffic from any source.

Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy. (Other pods in the namespace that are not selected by any NetworkPolicy will continue to accept all traffic.)

Network policies do not conflict; they are additive. If any policy or policies select a pod, the pod is restricted to what is allowed by the union of those policies' ingress/egress rules. Thus, order of evaluation does not affect the policy result.
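To illustrate that additive behavior with a sketch: on its own, a policy like the one below (using the run= labels from this example) would restrict postgres-app to accept traffic only from grafana-app. Combined with deny-from-other-namespaces above, however, it changes nothing, because that policy already allows every pod in the namespace and the union of the two still allows them all.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: postgres-allow-grafana-only
spec:
  podSelector:
    matchLabels:
      run: postgres-app    # applies to the postgres pod
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: grafana-app # only grafana may connect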

  • Then we apply the rules to both namespaces, because a NetworkPolicy is scoped to the namespace it is created in:
$ kubectl apply -n tenant1 -f default-deny-other-ns.yaml 
networkpolicy.networking.k8s.io/deny-from-other-namespaces created
$ kubectl apply -n tenant2 -f default-deny-other-ns.yaml 
networkpolicy.networking.k8s.io/deny-from-other-namespaces created

$ kubectl apply -n tenant1 -f allow-grafana-ingress.yaml 
networkpolicy.networking.k8s.io/web-allow-external created
$ kubectl apply -n tenant2 -f allow-grafana-ingress.yaml 
networkpolicy.networking.k8s.io/web-allow-external created
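  • Before testing, you can confirm that both policies exist in each namespace:
$ kubectl get networkpolicy --all-namespaces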
  • Now for the final test, I'll open a shell inside grafana-app in tenant1, try to reach postgres-app in both namespaces, and check the output:
$ kubectl exec -n tenant1 -it grafana-app -- /bin/sh
/ ### POSTGRES SAME NAMESPACE ###
/ # wget -O- postgres-app:8080
Connecting to postgres-app:8080 (10.1.20.92:8080)
Hello, world!
Version: 1.0.0
Hostname: postgres-app

/ ### GRAFANA OTHER NAMESPACE ###
/ # wget -O- --timeout=1 http://grafana-app.tenant2.svc.cluster.local:8080
Connecting to grafana-app.tenant2.svc.cluster.local:8080 (10.1.17.50:8080)
Hello, world!
Version: 2.0.0
Hostname: grafana-app

/ ### POSTGRES OTHER NAMESPACE ###
/ # wget -O- --timeout=1 http://postgres-app.tenant2.svc.cluster.local:8080
Connecting to postgres-app.tenant2.svc.cluster.local:8080 (10.1.29.215:8080)
wget: download timed out
  • You can see that DNS still resolves, but the NetworkPolicy blocks access to the backend pods.
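  • As a side note, both rules above are ingress policies, enforced at the destination pod; that's why the request times out rather than being rejected at the source. If you also want to block cross-namespace traffic on the source side, here is a sketch of an egress policy you could apply per namespace (the k8s-app=kube-dns label is the usual kube-dns label on GKE; the DNS exception keeps name resolution working):
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: restrict-egress-to-own-namespace
spec:
  podSelector: {}        # applies to all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector: {}    # allow egress to pods in the same namespace
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:               # allow DNS lookups cluster-wide
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53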

If, after double-checking that NetworkPolicy is enabled on the master and nodes, you still face the same issue, let me know in the comments and we can dig further.