0 votes

On AKS I have a service of type LoadBalancer with two ports defined: one for general access (with two-way authentication) and the other for exclusive access from a Service Fabric cluster, also on Azure. To achieve the exclusive access I changed the inbound rule on the VMs to only allow the SF cluster. The problem is that the rule is often reset to its default, presumably by a deployment from Azure DevOps that modifies the AKS service (although the LoadBalancer object never changes).

The LoadBalancer configuration looks like this:

apiVersion: v1
kind: Service
metadata:
  name: myservice-loadbalancer
spec:
  ports:
  - name: public-port
    port: 1234
    targetPort: public-port
  - name: service-fabric-port
    port: 4321
    targetPort: service-fabric-port
  selector:
    app: myservice
  type: LoadBalancer

A possible workaround is to add the allowed IPs to the LoadBalancer object, as recommended here: https://github.com/Azure/AKS/issues/570#issuecomment-413299212, but in my case that would also restrict the "public-port".
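For reference, that workaround uses `loadBalancerSourceRanges`, which applies to every port of the service rather than per port; a minimal sketch (the CIDR is a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice-loadbalancer
spec:
  type: LoadBalancer
  # Applies to ALL ports of the service -- this is why it would
  # also restrict "public-port". The CIDR below is a placeholder.
  loadBalancerSourceRanges:
  - 203.0.113.0/24
  selector:
    app: myservice
  ports:
  - name: public-port
    port: 1234
    targetPort: public-port
  - name: service-fabric-port
    port: 4321
    targetPort: service-fabric-port
```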

I cannot think of a way out other than creating two LoadBalancer objects, one per port. But that is not a clean workaround either: it is the same service, only exposed through two different ports, and this way I would have two IPs for it. Also, as mentioned in the link above, changes to the inbound rules should be persistent.

* UPDATE * I changed the inbound rule in the network security group created for the AKS cluster, aks-agentpool--nsg. These are the rules that are periodically reset by a culprit I cannot identify, but I assume it is the DevOps deployment.

2
Why don't you create an internal LB service that SF can use to talk to Kubernetes? – 4c74356b41
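The internal load balancer the comment suggests is enabled with an annotation on the service, so SF could reach it over the VNet without a public IP; a minimal sketch (the service name here is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice-internal
  annotations:
    # Tells AKS to provision an internal (VNet-only) Azure load balancer
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: myservice
  ports:
  - name: service-fabric-port
    port: 4321
    targetPort: service-fabric-port
```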

2 Answers

0 votes

How about adding the rules to the NSG of the subnet to which AKS is deployed? Those won't be reset to the default when a node restarts.

0 votes

As the other answer mentioned, the solution was to add new rules on the Network Security Group with a higher priority (a lower priority number) than the existing one.

For example, the LoadBalancer service above creates one rule for TCP port 4321 that allows any internet source to access that port on the public IP assigned to the service. Let's say the priority given to that rule is 500. I can change that rule, but it will be reset later.

I have to add two more rules with higher priorities, let's say 400 and 401. Both rules have as destination the public IP of the service and port 4321. Rule 400 will allow access to the Service Tag ServiceFabric, while rule 401 will deny access to the Service Tag Internet.

Rules are evaluated in priority order (400, then 401, then 500), so Service Fabric traffic is allowed by rule 400 and everything else is denied by rule 401 before rule 500 is ever reached. Rules 400 and 401 are not created by Azure, so they won't be reset.
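The two rules described above could be created with the Azure CLI; a sketch, with the resource group, NSG name, and public IP as placeholders you would fill in:

```shell
# Rule 400: allow the ServiceFabric service tag to reach port 4321.
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name <nsg-name> \
  --name AllowServiceFabric4321 \
  --priority 400 \
  --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes ServiceFabric \
  --destination-address-prefixes <service-public-ip> \
  --destination-port-ranges 4321

# Rule 401: deny everything else coming from the Internet service tag.
az network nsg rule create \
  --resource-group <resource-group> \
  --nsg-name <nsg-name> \
  --name DenyInternet4321 \
  --priority 401 \
  --direction Inbound --access Deny --protocol Tcp \
  --source-address-prefixes Internet \
  --destination-address-prefixes <service-public-ip> \
  --destination-port-ranges 4321
```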