On AKS I have a Service of type LoadBalancer with two ports defined: one for general access (with two-way authentication) and the other for exclusive access from a Service Fabric cluster, also on Azure. To achieve the exclusive access I changed the inbound rule on the VMs to only allow the SF cluster. The problem is that I often see the rule reset to its default, presumably because of a deployment from Azure DevOps that modifies the AKS service (although the LoadBalancer object itself never changes).
The LoadBalancer configuration looks like this:
apiVersion: v1
kind: Service
metadata:
  name: myservice-loadbalancer
spec:
  ports:
  - name: public-port
    port: 1234
    targetPort: public-port
  - name: service-fabric-port
    port: 4321
    targetPort: service-fabric-port
  selector:
    app: myservice
  type: LoadBalancer
A possible workaround is to add the allowed IPs to the LoadBalancer object itself, as recommended here: https://github.com/Azure/AKS/issues/570#issuecomment-413299212, but in my case that would restrict the "public-port" as well.
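If I understand that suggestion correctly, it amounts to setting loadBalancerSourceRanges on the Service, and that field applies to every port of the Service, not to individual ones. Roughly (the CIDR is a placeholder, not the SF cluster's real range):

apiVersion: v1
kind: Service
metadata:
  name: myservice-loadbalancer
spec:
  # Applies to ALL ports of this Service, so it would also lock down public-port
  loadBalancerSourceRanges:
  - 203.0.113.0/24   # placeholder for the SF cluster's outbound IP range
  ports:
  - name: public-port
    port: 1234
    targetPort: public-port
  - name: service-fabric-port
    port: 4321
    targetPort: service-fabric-port
  selector:
    app: myservice
  type: LoadBalancer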
I cannot think of a way out other than creating two LoadBalancer objects, one per port (see the sketch below). But that is not a clean workaround: it is the same service, just exposed through two different ports, and this way I would end up with two IPs for the same service. Besides, as mentioned in the link above, changes to the inbound rules are supposed to be persistent.
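For completeness, the split I have in mind would look roughly like this, with the source restriction only on the Service Fabric Service (Service names and the CIDR are placeholders I made up for illustration):

apiVersion: v1
kind: Service
metadata:
  name: myservice-public          # placeholder name
spec:
  ports:
  - name: public-port
    port: 1234
    targetPort: public-port
  selector:
    app: myservice
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: myservice-sf              # placeholder name
spec:
  # Restriction only affects this Service, leaving the public one open
  loadBalancerSourceRanges:
  - 203.0.113.0/24   # placeholder for the SF cluster's IP range
  ports:
  - name: service-fabric-port
    port: 4321
    targetPort: service-fabric-port
  selector:
    app: myservice
  type: LoadBalancer

Each Service would get its own public IP, which is exactly what I would like to avoid.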
* UPDATE * I changed the inbound rule in the Network Security Group created for the AKS cluster, aks-agentpool--nsg. These are the rules that get reset periodically by a culprit I cannot identify, but I assume it is the DevOps deployment.
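For reference, the change I keep having to re-apply is roughly equivalent to this az CLI call (resource group, rule name and source IP are placeholders; the NSG name is the one from my cluster):

az network nsg rule create \
  --resource-group MC_myresourcegroup_myakscluster_westeurope \
  --nsg-name aks-agentpool--nsg \
  --name AllowServiceFabricOnly \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 203.0.113.4 \
  --destination-port-ranges 4321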