What I'm trying to do is detailed below the diagram
The client should only be able to connect to VM1 and VM2 through the Azure Standard Load Balancer on port X, and not directly via the VM IP addresses.
We have a rule in the LB to forward traffic on port X to the backend pool, which consists of VM1 and VM2.
Session persistence (sticky sessions) is set to Client IP
There is no health probe for this rule
Windows Firewall is disabled on both VMs
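In the scenarios below, "netcat exposing port X" just means a plain listener on each VM, something like the following (using Nmap's ncat; the port value is a stand-in since the real port X is redacted):

```shell
PORT_X=8080   # hypothetical stand-in for the real port X

# On VM1 and VM2: listen on port X and keep accepting new connections
ncat --listen --keep-open "$PORT_X"

# From the client, probing the same way as in the tests below:
telnet 10.100.23.4 "$PORT_X"
```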
NSG rule setups tried so far:
Scenario 1
netcat exposing port X on VM1 and VM2
NSG rule allowing traffic from Client IP to Load Balancer IP
DENY all inbound
From Client
telnet 10.100.23.4 X - Connection Failed
telnet 10.100.23.5 X - Connection Failed
telnet 10.100.23.6 X - Connection Failed
Scenario 2
netcat exposing port X on VM1 and VM2
NSG rule allowing traffic from Client IP to Load Balancer IP on port X
NSG rule allowing traffic from Internal LB IP to entire subnet on all ports
DENY all inbound
From Client
telnet 10.100.23.4 X - Connection Failed
telnet 10.100.23.5 X - Connection Failed
telnet 10.100.23.6 X - Connection Failed
Scenario 3
netcat exposing port X on VM1 and VM2
NSG rule allowing traffic from Client IP to Load Balancer IP on port X
NSG rule allowing traffic from Internal LB IP to entire subnet on all ports
NSG rule allowing traffic from Client IP to the VM1 and VM2 IPs
DENY all inbound
From Client
telnet 10.100.23.4 X - Connection Succeeded
telnet 10.100.23.5 X - Connection Succeeded
telnet 10.100.23.6 X - Connection Succeeded
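For concreteness, the Scenario 3 rule set could be scripted roughly like this with the Azure CLI. The resource group, NSG name, client IP, port, and priorities are all placeholders for the redacted values, and I'm assuming 10.100.23.4 is the LB frontend and .5/.6 are the VMs:

```shell
# Hypothetical stand-ins for the redacted/real values
RG=my-rg                 # resource group (placeholder)
NSG=my-nsg               # NSG on the VM subnet/NICs (placeholder)
CLIENT_IP=10.100.22.10   # the client's IP (placeholder)
LB_IP=10.100.23.4        # internal LB frontend IP (assumed)
SUBNET=10.100.23.0/24    # the VM subnet (assumed)
PORT_X=8080              # stand-in for port X

# Rule 1: Client -> LB frontend IP on port X
az network nsg rule create -g "$RG" --nsg-name "$NSG" -n AllowClientToLB \
  --priority 100 --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes "$CLIENT_IP" \
  --destination-address-prefixes "$LB_IP" --destination-port-ranges "$PORT_X"

# Rule 2: Internal LB IP -> entire subnet, all ports
az network nsg rule create -g "$RG" --nsg-name "$NSG" -n AllowLBToSubnet \
  --priority 110 --direction Inbound --access Allow --protocol '*' \
  --source-address-prefixes "$LB_IP" \
  --destination-address-prefixes "$SUBNET" --destination-port-ranges '*'

# Rule 3: Client -> VM IPs (the rule I'd like to be able to drop)
az network nsg rule create -g "$RG" --nsg-name "$NSG" -n AllowClientToVMs \
  --priority 120 --direction Inbound --access Allow --protocol Tcp \
  --source-address-prefixes "$CLIENT_IP" \
  --destination-address-prefixes 10.100.23.5 10.100.23.6 \
  --destination-port-ranges '*'

# Deny all other inbound
az network nsg rule create -g "$RG" --nsg-name "$NSG" -n DenyAllInbound \
  --priority 4096 --direction Inbound --access Deny --protocol '*' \
  --source-address-prefixes '*' --destination-address-prefixes '*' \
  --destination-port-ranges '*'
```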
Does anyone know of a combination of NSG rules I can use to deny direct access to VM1 and VM2 from the client, whilst still allowing traffic to reach the VMs through the LB?
I feel like I'm missing a trick here, as this seems a pretty standard thing to want security-wise.