
We deployed our app's Pod and Service in Azure Kubernetes Service, but we cannot connect to our container app from a VM in another Virtual Network. It seems that a Service of type internal load balancer cannot be used for this purpose when the connection comes from an Azure resource (VM) in another Virtual Network/subnet, as opposed to from an AKS Pod. I read the docs at https://docs.microsoft.com/en-us/azure/aks/ingress-internal-ip but I am still confused about how I should modify my YAML definition in order to deploy our app behind an ingress controller with an internal load balancer. Can you please assist me with how the YAML should be modified? And do I really need to install something with Helm (I have never used it before), and from where would that need to be installed? I would like to AVOID that if possible; I do not understand the concept described in the docs. For us I think it is OK if the ingress controller is in the same namespace.

Thank you!

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fa-ads-deployment
  labels:
    app: fa-ads-deployment
spec:
  replicas: 1
  template:
    metadata:
      name: frontarena-ads-aks-test
      labels:
        app: frontarena-ads-aks-test
    spec:
      nodeSelector:
        kubernetes.io/os: linux  # beta.kubernetes.io/os is deprecated
      restartPolicy: Always
      containers:
      - name: frontarena-ads-aks-test
        image: faselect.dev/frontarena/ads:test1
        ports:
          - containerPort: 9000
  selector:
    matchLabels:
      app: frontarena-ads-aks-test
---
apiVersion: v1
kind: Service
metadata:
  name: frontarena-ads-aks-test
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 9000
  selector:
    app: frontarena-ads-aks-test

2 Answers

Answer 1 (score: 1)

The article you shared creates an ingress controller and an Ingress resource. Ingress controllers are not started automatically with a cluster; a controller acts as a reverse proxy and load balancer, and watches Ingress objects in all namespaces. Use this page to choose the ingress controller implementation that best fits your cluster — each one has its own configuration: https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/.

Once the controller is watching for ingress traffic, you define the routing rules in an Ingress resource. For example, in the manifest below, if the user navigates to test.com/icons then the service test is called. You do not necessarily have to define the host.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-resource-backend
spec:
  rules:
    - host: test.com
      http:
        paths:
          - path: /icons
            pathType: ImplementationSpecific
            backend:
              service:
                name: test
                port:
                  number: 9999

You can check your Ingress resource using the command below (the sample output is from one of my clusters):

kubectl get ingress -n <namespace>
NAME                       CLASS    HOSTS      ADDRESS     PORTS   AGE
ingress-resource-backend   public   test.com   127.0.0.1   80      1d
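Adapted to your manifests, a minimal Ingress routing to your frontarena-ads-aks-test Service on port 9000 might look like the sketch below. The resource name frontarena-ads-ingress is made up, and ingressClassName: nginx assumes you deploy the NGINX ingress controller; no host is set, matching the note above.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontarena-ads-ingress   # hypothetical name
spec:
  ingressClassName: nginx        # assumes the NGINX ingress controller is installed
  rules:
    - http:                      # no host: matches any hostname
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontarena-ads-aks-test  # your existing Service
                port:
                  number: 9000
```

With this in place, your Service no longer needs to be type LoadBalancer itself; traffic reaches it through the controller.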

Summary:

  • Choose the ingress controller that suits your project. You do not need Helm: go to the controller's Git project, download the manifests, and run kubectl apply -f on them; for example, for NGINX on Azure see https://kubernetes.github.io/ingress-nginx/deploy/#azure. Controllers might or might not create a namespace called ingress — that depends purely on their configuration (the minikube ingress is a simple addon) — and they will spin up their pods on their own without much manual intervention. You can modify the ConfigMap as per your requirements if needed.
  • Once the controller is up and running, write an Ingress resource for it to pick up the routing rules.
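As a sketch of the no-Helm route (the manifest URL and controller version here are assumptions — check the ingress-nginx deploy page for the current one), you install the controller with kubectl and then annotate its Service so Azure provisions an internal rather than public load balancer:

```shell
# Install the NGINX ingress controller from its static manifest
# (version/URL is an assumption; see kubernetes.github.io/ingress-nginx/deploy):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml

# Ask Azure for an internal (private) load balancer for the controller's
# Service; by default it lives in the ingress-nginx namespace:
kubectl annotate service ingress-nginx-controller -n ingress-nginx \
  service.beta.kubernetes.io/azure-load-balancer-internal="true"
```

Whether annotating an already-provisioned Service switches the load balancer in place can vary; editing the Service manifest before the first apply is the safer path.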

If you still have doubts, there is a detailed video here. https://www.youtube.com/watch?v=u948CURLDJA

Answer 2 (score: 0)

An update on my earlier statement that a Service of type internal load balancer cannot be used to establish a network connection from an Azure VM in another Virtual Network/subnet:

I was able to connect to the internal LoadBalancer Service from another Azure VNet (the VNets have VNet peering established).

I just created the Service as described at https://docs.microsoft.com/en-us/azure/aks/internal-lb#create-an-internal-load-balancer, got the IP address of the internal load balancer (an IP from the VNet where the AKS cluster is deployed), and was then able to make an HTTP call from a Pod in another AKS cluster deployed into another VNet (which is peered with the first VNet).
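For reference, a sketch of how you might read the internal load balancer's IP once the Service is up and test it from the peered network (Service name taken from the question; <ILB-IP> is a placeholder for the address printed by the first command):

```shell
# Print the private IP Azure assigned to the internal load balancer
# (empty until provisioning finishes):
kubectl get service frontarena-ads-aks-test \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# From a VM or Pod in the peered VNet, test connectivity on port 9000:
curl http://<ILB-IP>:9000/
```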