
Scenario 1 (working): Here is the scenario that works fine: I have two publicly accessible stateless services, Foo and Bar:

  • Foo: 2 instances, listening on port 8081
  • Bar: 1 instance, listening on port 8082

Sending requests to either http://clusteruri:8081 or http://clusteruri:8082 works fine, and for Foo I can see that requests are nicely distributed between the two nodes hosting its instances.

Scenario 2 (NOT working): Here is the scenario I would like to enable: again two stateless services, Foo and Bar:

  • Foo: 2 instances, listening on URI prefix http://+:8080/foo
  • Bar: 1 instance, listening on URI prefix http://+:8080/bar

Please note that both services listen on the same port, but different paths (achieved by using a host that is built on top of http.sys, like WebListener).

Here things start to get odd: it seems ASF/the load balancer doesn't really understand this and assumes that all three nodes simply listen on port 8080, resulting in some requests to Foo ending up on the node hosting Bar and vice versa.

It seems that ASF/the load balancer can automatically handle scenarios where services listen on dedicated ports, but doesn't really support services listening on the same port with different paths.

My questions:

  • Is there a way to get Scenario 2 working "out of the box" like it works for Scenario 1 (i.e. without implementing a custom app gateway service that does the routing)?
  • Can someone please shed some light on how ASF configures/communicates with the load balancer to get Scenario 1 working? I.e. where can I "see" that the load balancer was configured so that requests to Foo go to either Node0 or Node1 and requests to Bar go to Node2, depending on which port the request is sent to?
Could you please share the code of your communication listener? And how you create a proxy, if you do so? – cassandrad

1 Answer


Service Fabric doesn't have a network load balancer. Service Fabric is just the clustering/orchestration/application platform that runs on a set of VMs. When you create a cluster in Azure through ARM (or the Azure portal), one of the resources you get in the standard ARM templates is the Azure Load Balancer, but it is a completely separate thing that only knows about the VMs you're deploying, not the services running on them. Configuration of the load balancer is something you do in your ARM template (or again through the Azure portal). Service Fabric doesn't know about the load balancer. It's the same topology you would expect if you set up a cluster on your own hardware.
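
For illustration, a load balancing rule for Scenario 1's Foo service might look roughly like the sketch below in the cluster's ARM template. The rule name and the variable references are placeholders, not something Service Fabric generates; the rule simply forwards TCP traffic arriving on the frontend port to the same port on the backend pool that contains all of the node type's VMs.

    {
      "name": "FooLBRule",
      "properties": {
        "frontendIPConfiguration": { "id": "[variables('lbIPConfigId')]" },
        "backendAddressPool": { "id": "[variables('lbPoolId')]" },
        "probe": { "id": "[variables('fooProbeId')]" },
        "protocol": "Tcp",
        "frontendPort": 8081,
        "backendPort": 8081,
        "enableFloatingIP": false
      }
    }

This is also where you can "see" the mapping the second question asks about: it lives in the load balancer resource's rules and probes (in the ARM template or the Azure portal), not anywhere inside Service Fabric.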

The reason Scenario 1 works and Scenario 2 doesn't in Azure is that the Azure Load Balancer is a layer-4 load balancer: it understands open ports but not application-level protocols like HTTP, so it has no notion of URLs and doesn't know that you have different applications on your nodes - all it sees is open ports. However, the ALB does allow you to set HTTP probes that tell the ALB which nodes to send traffic to.
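
As a sketch (again with placeholder names), a probe definition in the ARM template looks like this. A plain TCP probe on port 8081 only succeeds on the nodes where Foo is actually listening, which is why the Scenario 1 rule above ends up sending traffic only to Node0 and Node1; an HTTP probe can additionally check a request path. The rules themselves remain per-port, though, so they cannot split traffic arriving on one port by URL path, which is why Scenario 2 needs something like a gateway in front.

    {
      "name": "FooProbe",
      "properties": {
        "protocol": "Tcp",
        "port": 8081,
        "intervalInSeconds": 5,
        "numberOfProbes": 2
      }
    }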

Here's a little more background info that might be useful: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-connect-and-communicate-with-services/