1 vote

I have the following setup in Azure Resource Manager:

On both VMs of the scale set I have deployed a test web app which uses SignalR 2 on .NET 4.5.2. The test web app uses Azure Redis Cache as a backplane. The web app project can be found here on GitHub: https://github.com/gaclaudiu/SignalrChat-master.

During the tests I noticed that after a SignalR connection is opened, all the data sent from the client in subsequent requests arrives at the same server from the scale set. It seems to me that the SignalR connection knows which server from the scale set to go to.

I am curious to know more about how this works. I tried to do some research on the internet but couldn't find anything clear on this point.

I am also curious to know what happens in the following case: client 1 has an open SignalR connection to server A, and the next request from client 1 through SignalR goes to server B.

Will this cause an error? Or will the client just be notified that no connection is open and try to open a new one?


5 Answers

3 votes

Well, I am surprised that it works at all. The problem is that SignalR performs multiple requests until the connection is up and running, and there is no guarantee that all of those requests go to the same VM, especially if no session persistence is enabled. I had a similar problem. You can activate session persistence on the Load Balancer, but as you pointed out, acting on OSI layer 4 it will do this using the client IP (imagine everyone in the same office hitting your API from the same IP). In our project we use Azure Application Gateway, which works with cookie affinity at the OSI application layer. So far it seems to work as expected.
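To make the cookie-affinity idea concrete, here is a minimal sketch of how a layer-7 balancer can pin a client to one backend with a cookie. The cookie name, backend names, and routing logic here are hypothetical stand-ins; Application Gateway's actual implementation is internal to Azure:

```python
BACKENDS = ["vm-0", "vm-1"]
COOKIE = "AffinityCookie"  # hypothetical name, not Application Gateway's real cookie


def route(request_cookies: dict) -> tuple[str, dict]:
    """Pick a backend for a request; pin it with a cookie for later requests."""
    pinned = request_cookies.get(COOKIE)
    if pinned in BACKENDS:
        return pinned, {}  # client already pinned: stick to the same VM
    # First request: choose any backend (round robin, hash, ...) and set the cookie.
    backend = BACKENDS[0]
    return backend, {COOKIE: backend}


# First request carries no cookie, so a backend is chosen and a cookie is issued.
vm, set_cookie = route({})
# Every subsequent request presents the cookie and lands on the same backend.
vm_again, _ = route(set_cookie)
assert vm == vm_again
```

Because the decision is keyed on a cookie rather than the client IP, two clients behind the same office NAT can still be spread across different VMs.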

1 vote

I think you misunderstand how the load balancer works. Every TCP connection must send all of its packets to the same destination VM and port. A TCP connection would not work if, after sending several packets, the rest of the packets were suddenly sent to another VM and/or port. So the load balancer makes a decision on the destination for a TCP connection once, and only once, when that TCP connection is being established. Once the TCP connection is set up, all its packets are sent to the same destination IP/port for the duration of the connection. I think you are assuming that different packets from the same TCP connection can end up at different VMs, and that is definitely not the case.

So when your client creates a WebSocket connection, the following happens. An incoming request for a TCP connection is received by the load balancer. It decides, based on the distribution mode, which destination VM to send the request on to, and records this information internally. Any subsequent incoming packets for that TCP connection are automatically sent to the same VM, because the load balancer looks up the appropriate VM in that internal table. Hence, all the client messages on your WebSocket will end up at the same VM and port.

If you create a new WebSocket it could end up at a different VM but all the messages from the client will end up at that same different VM.
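The per-connection decision described above is essentially a hash over the connection's 5-tuple (source IP, source port, destination IP, destination port, protocol). Here is an illustrative sketch; the hash function and VM names are stand-ins, not Azure's actual algorithm:

```python
import hashlib

BACKENDS = ["vm-0", "vm-1", "vm-2"]


def pick_backend(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
                 protocol: str) -> str:
    """Map a connection's 5-tuple to one backend VM deterministically."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return BACKENDS[digest % len(BACKENDS)]


# Every packet of one TCP connection shares the same 5-tuple, so the hash
# always yields the same VM for the lifetime of that connection.
first = pick_backend("10.0.0.5", 50123, "52.1.2.3", 443, "tcp")
again = pick_backend("10.0.0.5", 50123, "52.1.2.3", 443, "tcp")
assert first == again

# A new connection gets a new ephemeral source port, so its 5-tuple differs
# and it may (or may not) land on a different VM.
new_conn = pick_backend("10.0.0.5", 50124, "52.1.2.3", 443, "tcp")
```

This is why a long-lived WebSocket is "sticky" even without session persistence, while separate short-lived HTTP requests from the same client can be spread across VMs.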

Hope that helps.

0 votes

On your Azure Load Balancer you'll want to configure Session persistence. This will ensure that when a client request gets directed to Server A, then any subsequent requests from that client will go to Server A.

Session persistence specifies that traffic from a client should be handled by the same virtual machine in the backend pool for the duration of a session.
- "None" specifies that successive requests from the same client may be handled by any virtual machine.
- "Client IP" specifies that successive requests from the same client IP address will be handled by the same virtual machine.
- "Client IP and protocol" specifies that successive requests from the same client IP address and protocol combination will be handled by the same virtual machine.
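The modes differ only in which fields of the connection feed the distribution decision. A small sketch of that idea, assuming a simple hash over the selected fields (the hashing and VM names are illustrative; Azure's real algorithm is not public):

```python
import hashlib

BACKENDS = ["vm-0", "vm-1"]


def pick(mode: str, src_ip: str, src_port: int, dst_ip: str, dst_port: int,
         protocol: str) -> str:
    """Choose a backend using only the fields the persistence mode allows."""
    if mode == "None":
        key = (src_ip, src_port, dst_ip, dst_port, protocol)  # full 5-tuple
    elif mode == "Client IP":
        key = (src_ip, dst_ip)                                # source IP sticks
    elif mode == "Client IP and protocol":
        key = (src_ip, dst_ip, protocol)                      # IP + protocol stick
    else:
        raise ValueError(f"unknown mode: {mode}")
    digest = int(hashlib.sha256(repr(key).encode()).hexdigest(), 16)
    return BACKENDS[digest % len(BACKENDS)]


# With "Client IP", two connections from the same client (different source
# ports) hash to the same VM, which is what SignalR's multiple requests need.
a = pick("Client IP", "10.0.0.5", 50123, "52.1.2.3", 443, "tcp")
b = pick("Client IP", "10.0.0.5", 50999, "52.1.2.3", 443, "tcp")
assert a == b
```

The trade-off mentioned in the answers above follows directly: since "Client IP" keys on the source address, all clients behind one NAT hash to the same VM.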

0 votes

SignalR only knows the URL you provided when starting the connection, and it uses it to send requests to the server. I believe Azure App Service uses sticky sessions by default. You can find some details about this here: https://azure.microsoft.com/en-us/blog/azure-load-balancer-new-distribution-mode/ When using multiple servers and scale-out, the client can send messages to any server.

0 votes

Thank you for your answers, guys. Doing a bit of reading, it seems that the Azure Load Balancer uses 5-tuple distribution by default. Here is the article: https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-distribution-mode The problem with 5-tuple is that it is sticky per transport session. I think this is what causes the client requests using SignalR to hit the same VM in the scale set: the balancer interprets the SignalR connection as a single transport session.

Application Gateway wasn't an option from the beginning, because it has many features which we do not need (so it doesn't make sense to pay for something we don't use).

But now it seems that Application Gateway is the only load balancer in Azure capable of doing round robin when balancing traffic.