We are seeing strange behavior in our development and production environments when using SignalR 2.2.0 with the Microsoft ASP.NET SignalR Service Bus messaging backplane against certain Azure Service Bus instances. Some instances appear to become corrupted and clogged, and we see the problems described below.
First, here is my OWIN startup code:
using Microsoft.AspNet.SignalR;
using Microsoft.Owin.Cors;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Read the Service Bus connection string from appSettings.
        string connectionString = System.Configuration.ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"];

        // Register the Service Bus backplane before mapping SignalR.
        GlobalHost.DependencyResolver.UseServiceBus(connectionString, "MyApplicationName");

        // Branch the pipeline here for requests that start with "/signalr"
        app.Map("/signalr", map =>
        {
            map.UseCors(CorsOptions.AllowAll);
            var hubConfiguration = new HubConfiguration
            {
                //EnableJSONP = true,
                EnableDetailedErrors = true
            };
            map.RunSignalR(hubConfiguration);
        });
    }
}
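In case the registration itself matters, the backplane can also be registered through the ServiceBusScaleoutConfiguration overload, which exposes the backplane settings explicitly. The sketch below is untested, and the property name is my understanding of the Microsoft.AspNet.SignalR.ServiceBus package, so it should be checked against the installed version:

// Untested sketch: explicit backplane configuration instead of the simple
// (connectionString, appName) overload. "MyApplicationName" is the same topic
// prefix as above; TopicCount is my assumption for the property name.
var scaleoutConfig = new ServiceBusScaleoutConfiguration(connectionString, "MyApplicationName")
{
    TopicCount = 5 // number of Service Bus topics the backplane spreads messages across
};
GlobalHost.DependencyResolver.UseServiceBus(scaleoutConfig);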
The main symptom is an intermittent failure for SignalR to connect using any transport. Performance is slow compared to running with no backplane, and with verbose logging enabled on the SignalR client, I see the message "SignalR: webSockets transport timed out when trying to connect." SignalR then falls back through the remaining transports (forever frame, long polling) and eventually gives up.
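In case it is relevant to that timeout message, the only related server-side setting I know of is TransportConnectTimeout, which (as I understand SignalR 2.x) the client uses to derive how long it waits for a transport to connect. A diagnostic sketch, added in the same Configuration method, not something we currently rely on:

// Diagnostic sketch only: raise the transport connect timeout (default 5 seconds)
// so the client waits longer before abandoning a transport. Requires using System;
GlobalHost.Configuration.TransportConnectTimeout = TimeSpan.FromSeconds(15);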
Here's the kicker: for some of our service bus instances, the performance is rock solid and we never have issues. Other service bus instances cause the problems described above.
Why do we have multiple service buses? We only use one for our app, but each developer also has a Service Bus instance to experiment with. It keeps me up at night that an Azure Service Bus instance can apparently become corrupted and I don't know why.
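To make swapping a suspect namespace easier while we investigate, the topic prefix could also be read from configuration next to the connection string. A rough sketch; the "SignalR.TopicPrefix" key is made up for illustration and is not something SignalR itself looks for:

// Illustrative sketch: read both the namespace connection string and the backplane
// topic prefix from appSettings so a suspect namespace can be swapped per environment
// without code changes. "SignalR.TopicPrefix" is a made-up key.
string busConnection = System.Configuration.ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"];
string topicPrefix = System.Configuration.ConfigurationManager.AppSettings["SignalR.TopicPrefix"] ?? "MyApplicationName";
GlobalHost.DependencyResolver.UseServiceBus(busConnection, topicPrefix);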
Questions:
- Has anyone else experienced this problem?
- Have you seen Service Bus instances corrupt or misbehave with SignalR or other applications?
- What could account for this behavior?