1 vote

I'm currently implementing a WCF service that controls operations against a CRM database. I've applied several aspects of CQRS to separate all operations into distinct Commands and Queries.

Question 1:

Several of the Command operations perform a relatively simple operation themselves, but trigger a secondary, often more expensive, operation. I've separated these triggers out, but they still execute on the original operation's thread.

I was thinking of separating these into a second WCF service with one-way operations. That way the original operation can return immediately, and the "Trigger" service can handle all of the secondary operations. Reliability is not a major concern at the moment, so logging any failures would suffice, but this could be extended in the future.
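For illustration, the kind of one-way contract I have in mind for the trigger service would look something like this (the service and operation names are just placeholders):

using System;
using System.ServiceModel;

// Placeholder contract for the proposed secondary "Trigger" service.
// IsOneWay = true means the caller does not wait for a reply, so the
// original command operation can return to the client immediately.
[ServiceContract]
public interface ITriggerService
{
    [OperationContract(IsOneWay = true)]
    void RecalculateAccountTotals(Guid accountId);

    [OperationContract(IsOneWay = true)]
    void SyncContactToExternalSystem(Guid contactId);
}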

Is this the typical way of handling such a scenario? Or is there a way this could be done without a second service?

Question 2:

Are there significant performance improvements from dividing WCF services like this on the same server?

My thinking is that the core WCF service's application pool will be freed up more quickly (although the new pools will compete for the same resources), with the possibility of moving additional services onto separate servers later.


3 Answers

1 vote

I agree with oleksii that a queue may be a better option than a WCF service if you're looking to scale. It will also allow you to make your commands one-way and simply return a basic ack/nack to the user indicating that the command was received and is being processed (not that it has been processed).

I'd suggest you look at Greg Young's intro to CQRS document on his CQRS Info site, specifically in the Command Side section (about half-way down).

Getting to your questions:

  1. By moving to a queue or a WCF service that is entirely one-way, I think you solve your problem. Your one-way service receives the command from the client and hands it to your domain to process. Your domain then fires off zero, one, or many events based on what needs to be done -- these events could be calls to other WCF services in your application, or they could be messages dropped on a queue and picked up by your event handlers (see the sketch after this list). The key is that you're letting the business rules (the domain) determine which events to raise. How they get handled is an implementation detail (but I'd go with a queue or an event store). Check out Jonathan Oliver's Event Store.
  2. The faster you can free up IIS threads (or threads in general), the better; so dividing your services into synchronous and asynchronous may well be worthwhile. If you're going the CQRS route, making your services one-way and returning just ack/nack messages is probably a good idea, and it may help from a performance perspective since you'll free up threads faster to handle more requests.
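As a rough sketch of that command-to-domain-to-events flow (the types and names here are invented, not taken from any particular framework):

using System;
using System.Collections.Generic;

// Invented types for illustration only.
public interface IDomainEvent { }

public interface IHandle<TEvent> where TEvent : IDomainEvent
{
    void Handle(TEvent domainEvent);
}

public class CustomerAddressChanged : IDomainEvent
{
    public Guid CustomerId { get; set; }
}

public class ChangeCustomerAddress
{
    public Guid CustomerId { get; set; }
    public string NewAddress { get; set; }
}

// The one-way service hands the command to this handler; the domain
// decides which events to raise. How each event gets handled (another
// WCF call, a queue message, an event store append) stays an
// implementation detail of the individual handlers.
public class ChangeCustomerAddressHandler
{
    private readonly IEnumerable<IHandle<CustomerAddressChanged>> _handlers;

    public ChangeCustomerAddressHandler(IEnumerable<IHandle<CustomerAddressChanged>> handlers)
    {
        _handlers = handlers;
    }

    public void Handle(ChangeCustomerAddress command)
    {
        // ... apply business rules and update the CRM aggregate here ...

        var evt = new CustomerAddressChanged { CustomerId = command.CustomerId };
        foreach (var handler in _handlers)
            handler.Handle(evt);
    }
}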

I hope this helps. Good luck!!

1 vote

There's nothing stopping you from keeping your commands synchronous and running only some of the execution code asynchronously, without separating services.

You mentioned that you have separated your "triggers". I assume that means they run as separate methods? If so, you can easily run them in the background and let your main WCF thread return, without separating services. (The code below does NOT assume event sourcing.)

using System;
using System.Threading;

public class MyCommandHandler
{
    public string Do(MyCommand1 command)
    {
        var myCrmObject = ... ; // load some object
        string message = myCrmObject.CriticalWork(command); // synchronous

        // hand the secondary work to the thread pool
        ThreadPool.QueueUserWorkItem(x => myCrmObject.OtherWork(command)); // async

        // async alternative 1 - delegate BeginInvoke, no callback, no state
        //Action<MyCommand1> action = myCrmObject.OtherWork; // make into an action delegate
        //action.BeginInvoke(command, null, null);           // start the action asynchronously

        // async alternative 1a - a custom delegate instead of Action; same BeginInvoke call
        // async alternative 2 - BackgroundWorker
        // async alternative 3 - new Thread(o => myCrmObject.OtherWork((MyCommand1)o)).Start(command);

        return message; // reached before OtherWork finishes (or maybe even starts)
    }
}

If you were using an event stream and event handlers to perform the database writes and integrate with other systems, this could be structured a bit better: each individual event handler could decide for itself whether it should execute synchronously or not.
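For example (a sketch only; the handler and event names are made up), each handler can own the decision about whether its work runs on the calling thread or in the background:

using System;
using System.Threading;

public class ContactUpdated
{
    public Guid ContactId { get; set; }
}

// Expensive secondary work: this handler pushes itself off the request thread.
public class UpdateSearchIndexHandler
{
    public void Handle(ContactUpdated evt)
    {
        ThreadPool.QueueUserWorkItem(_ => RebuildIndexFor(evt.ContactId));
    }

    private void RebuildIndexFor(Guid contactId) { /* ... rebuild the index ... */ }
}

// Cheap work: this handler is happy to run synchronously.
public class AuditLogHandler
{
    public void Handle(ContactUpdated evt)
    {
        WriteAuditEntry(evt.ContactId);
    }

    private void WriteAuditEntry(Guid contactId) { /* ... write an audit row ... */ }
}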

0 votes

Just a suggestion

  1. You could try a work queue: services on the front end asynchronously add items, and services on the back end dequeue the items and process the work (see the sketch after this list). The state of each work item is saved to persistent storage, where it can be queried.

  2. It's better to have several workers on both sides of the queue, but to see any improvement your server needs to be multi-core (or multi-processor) so that it can truly parallelise the work. You may need to try different options and measure what works best.
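A minimal in-process sketch of that idea, using BlockingCollection as the queue (a production setup would more likely use a durable queue such as MSMQ, and all names here are invented):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class WorkItem
{
    public Guid Id { get; set; }
    public string Payload { get; set; }
}

public class WorkQueue
{
    private readonly BlockingCollection<WorkItem> _queue = new BlockingCollection<WorkItem>();

    // Front-end services call this and return immediately.
    public void Enqueue(WorkItem item)
    {
        _queue.Add(item);
        // persist a "Queued" status here so clients can query progress
    }

    // Back-end side: start one of these per worker (ideally one per core).
    public void StartWorker()
    {
        Task.Factory.StartNew(() =>
        {
            foreach (var item in _queue.GetConsumingEnumerable())
            {
                Process(item); // then persist "Completed" (or "Failed") status
            }
        }, TaskCreationOptions.LongRunning);
    }

    private void Process(WorkItem item) { /* ... do the expensive work ... */ }
}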