0
votes

Most examples about Service Fabric show that, after deployment, cluster endpoints appear magically, just as given in the service manifest, e.g.: <cluster-url>:port/api/home

Some people on forums mention tweaking the load balancer to allow access to the port.

Why the difference in opinions? Which way is correct? When I tried, I was never able to access a deployed API endpoint in an Azure cluster (load balancer tweaked or not). The local OneBox cluster worked, though.

2
An Azure Service Fabric cluster is created on an Azure scale set; you can take a look at Deploy a Linux Service Fabric cluster. – Charles Xu
How is that relevant to public port access? – Blue Clouds
It uses the Azure Load Balancer to manage access from the Internet. – Charles Xu
Yes, agreed, but for an SF application a load balancer is configured automatically. Should we add probes or change anything there to make our application accessible to the outside world? – Blue Clouds
I think it would be managed with the Service Fabric CLI. – Charles Xu

2 Answers

2
votes

The main detail most people forget when building SF applications is that they are building distributed applications. When you deploy a service in a cluster, you need a way to find it, and in some cases it can move around the cluster, so the solution must be able to handle this distribution.

It works locally because you have a single endpoint (localhost (127.0.0.1) > service) and you will always find your application there.

On SF, you hit a domain that maps to a load balancer, which maps to a set of machines, and one of those machines might have your application running on it (domain > LB IP > nodes > service).

The first things you need to know are:

  • Your service might not run on all nodes (machines) behind the load balancer. When the load balancer sends a request to a node where your service is not running, the request fails; the LB does not retry on another node, it forwards requests to random nodes, and in most cases it pins open connections to the same machine. If you need your service running on all nodes, set the instance count to -1, and you might see it working just by opening the ports on the LB.

  • Each NodeType has one load balancer in front of it, so always set a placement constraint on the service to prevent it from starting on a NodeType that is not exposed externally.

  • Every port opened by your application is opened on a per-node basis. If you need external access, the port must be opened in the load balancer manually or via script. The ports SF assigns to your service are meant to be managed internally, to avoid port conflicts between services running on the same node; SF does not open the ports in the LB for external access.
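The last point can be scripted with the Azure CLI. A minimal sketch: add a health probe and a load-balancing rule for the service's port. All resource names here ("my-rg", "LB-mycluster-nt1vm", the frontend/backend pool names) are placeholders that follow the defaults Azure typically generates for an SF cluster — check your own resource group for the real names first.

```shell
# Placeholder names below -- list the real ones first:
#   az network lb list --resource-group my-rg --query "[].name"

# 1. Add a health probe so the LB only routes to nodes where the service answers
az network lb probe create \
  --resource-group my-rg \
  --lb-name LB-mycluster-nt1vm \
  --name AppPortProbe \
  --protocol Tcp \
  --port 2345

# 2. Add a load-balancing rule forwarding the public port to the node port
az network lb rule create \
  --resource-group my-rg \
  --lb-name LB-mycluster-nt1vm \
  --name AppPortLBRule \
  --protocol Tcp \
  --frontend-port 2345 \
  --backend-port 2345 \
  --probe-name AppPortProbe \
  --frontend-ip-name LoadBalancerIPConfig \
  --backend-pool-name LoadBalancerBEAddressPool
```

This is a cloud provisioning fragment, so it only makes sense run against your own subscription after `az login`.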

There are many approaches to expose these services; you could also try:

  • Use a reverse proxy, like the built-in one, which will proxy the calls to your services spread across the cluster, no matter where they run.
  • Use NGINX as an API gateway or reverse proxy and configure it to call only specific services; in this case you need to provide it with the service addresses, so you would need to refresh the list when services start or stop.
  • Use Azure API Management to expose APIs hosted on SF.
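For the built-in reverse proxy, calls follow a fixed URI scheme: it listens on a cluster-wide port (19081 by default, when enabled on the cluster) and resolves the service's current location for you. The cluster FQDN, application name, service name, and path below are placeholders:

```shell
# Reverse proxy URI scheme:
#   http(s)://<cluster FQDN>:<reverse proxy port>/<AppName>/<ServiceName>/<suffix path>
# "MyApp", "MyApiService" and /api/home are hypothetical names.
curl "http://mycluster.eastus.cloudapp.azure.com:19081/MyApp/MyApiService/api/home"
```

Port 19081 itself must still be reachable (it is opened on the Azure LB by default only when the reverse proxy is enabled at cluster creation).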
0
votes

The help text for the custom endpoints setting says: "Custom endpoints allow for connections to applications running on this node type. Enter endpoints separated by a comma." It can only be set while creating the cluster.

So apparently, without setting the port here, the outside world can never access that port. At least that is what this user concluded in 2016.

Since there is a load balancer in front, if we add a probe and a rule from port x (public) to port y (backend pool), and open port y in the firewall on all the nodes, then it should work too. The remaining question is how to open the port.
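Opening a port in the Windows firewall on a node can be done with netsh. This is a sketch assuming Windows nodes and an example port of 2345; it has to run on every node, for instance through a scale set custom script extension.

```shell
# Allow inbound TCP traffic on port 2345 (example port) in the Windows firewall.
# Run on each node, e.g. via a VM scale set custom script extension.
netsh advfirewall firewall add rule name="SF app port 2345" dir=in action=allow protocol=TCP localport=2345
```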

What would happen if we specify a port (say 2345) in the service manifest? Then the port is opened in the firewall by SF for us, and it looks like this. And if there is a probe in the load balancer pointing to 2345, then it should work.

(screenshot: the inbound firewall rule Service Fabric created for the port)
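Declaring such a fixed port in ServiceManifest.xml looks roughly like this (a sketch; the endpoint name is a placeholder):

```xml
<!-- ServiceManifest.xml fragment; "WebApiEndpoint" is a placeholder name.
     A fixed Port makes SF open that port in the node firewall and
     register the endpoint for the service. -->
<Resources>
  <Endpoints>
    <Endpoint Name="WebApiEndpoint" Protocol="http" Port="2345" />
  </Endpoints>
</Resources>
```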

So unless we manually set a probe (and a load-balancing rule) in the load balancer, it should not work.