
I'm testing NServiceBus with Azure Queue backend. I configured NServiceBus using all the default settings and have this code to send a message:

string data;
while ((data = Console.ReadLine()) != null)
{
    Stopwatch sw = new Stopwatch();
    sw.Start();
    Bus.Send("testqueue", new Message() { Data = data });
    sw.Stop();
    Console.WriteLine("Sent time: " + sw.ElapsedMilliseconds);
}

When running on my dev machine, it takes ~700ms to send a message to the queue. The queue is geographically distant; writing to it directly with the Azure Storage client takes ~350ms.

Now I have two questions:

  1. I don't want the thread to block on the Bus.Send call. One option is to use the async/await pattern. Another option is an in-memory queue that delivers messages in the background, similar to 0MQ. The latter doesn't guarantee delivery, of course, but assuming there are some monitoring capabilities, I can live with that.
  2. Why does sending a message take twice the time of a simple write to the queue? Can this be optimized?
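To make the in-memory-queue option concrete, this is the kind of dispatcher I have in mind. It's a minimal sketch: BackgroundSender and its send delegate are my own names, not an NServiceBus feature, and pending messages are lost if the process dies before the queue drains.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Sketch: decouple callers from the slow Bus.Send by queueing in memory
// and draining on a single long-running background task.
public class BackgroundSender : IDisposable
{
    private readonly BlockingCollection<string> _pending = new BlockingCollection<string>();
    private readonly Task _pump;

    public BackgroundSender(Action<string> send)
    {
        // One consumer performs the actual (slow) remote sends.
        _pump = Task.Factory.StartNew(() =>
        {
            foreach (var item in _pending.GetConsumingEnumerable())
                send(item); // e.g. d => Bus.Send("testqueue", new Message { Data = d })
        }, TaskCreationOptions.LongRunning);
    }

    // Returns immediately; the caller never waits on network latency.
    public void Enqueue(string data)
    {
        _pending.Add(data);
    }

    public void Dispose()
    {
        _pending.CompleteAdding(); // let the pump drain what's left
        _pump.Wait();
    }
}
```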

1 Answer


What is the size of the data property?

I just ran this test myself (using the string "Whatever" as data) and I see an average latency of ~50ms for every remote send, with a throttle every 15 seconds making the calls take around ~300ms at that point (this is expected).

Do note that Azure Storage is a remote HTTP-based service and is therefore subject to latency that grows with distance; as far as I know it publishes no performance targets for latency either. Furthermore, it actively throttles to push back when data is being moved around internally, which happens roughly every 15 seconds. (See my storage internals talk to understand what goes on behind the scenes: http://www.slideshare.net/YvesGoeleven/azure-storage-deep-dive)
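When one of those throttling push-backs does hit, the usual remedy is to retry with an exponential backoff rather than fail the send. A rough sketch follows; sendOnce is a placeholder for your real send call, and retrying on every exception is a simplification (a production version would retry only on throttling/transient errors):

```csharp
using System;
using System.Threading;

public static class RetryHelper
{
    // Retries a send with exponential backoff: 100ms, 200ms, 400ms, ...
    // "sendOnce" stands in for the actual remote send.
    public static void SendWithBackoff(Action sendOnce, int maxAttempts = 5)
    {
        for (int attempt = 1; ; attempt++)
        {
            try { sendOnce(); return; }
            catch (Exception) when (attempt < maxAttempts)
            {
                // Back off before the next attempt.
                Thread.Sleep(100 * (1 << (attempt - 1)));
            }
        }
    }
}
```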

On the topic of async/await: if your purpose is to unblock the UI thread, then go ahead and do it this way...

await Task.Factory.StartNew(() => _bus.Send(new Message {
    Whatever = data
})).ConfigureAwait(false);

If your purpose is to achieve higher throughput, you should use more sending threads instead: a thread has to wait for the HTTP response anyway, whether it's the sending thread or the background thread spawned by async/await. Do note, however, that every queue is also throttled individually (at several hundred msgs/sec), no matter how many sending threads you use.
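For example, you could fan sends out over a bounded number of workers. This is only a sketch: the sendOne delegate stands in for your real send call, and 8 is an arbitrary degree of parallelism you'd tune against the per-queue throttle.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class ParallelSender
{
    // Sends all messages using up to maxThreads concurrent senders.
    // Each worker blocks on its own HTTP round-trip, so throughput scales
    // with the thread count until the per-queue throttle kicks in.
    public static void SendAll(IEnumerable<string> messages, Action<string> sendOne, int maxThreads = 8)
    {
        Parallel.ForEach(
            messages,
            new ParallelOptions { MaxDegreeOfParallelism = maxThreads },
            sendOne);
    }
}
```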

PS: It's also advisable to change the following settings on the .NET ServicePointManager to optimize it for lots of small HTTP requests:

ServicePointManager.UseNagleAlgorithm = false;
ServicePointManager.Expect100Continue = false;
ServicePointManager.DefaultConnectionLimit = 48;

Hope this helps...