I am working on a multi-client web-based application that analyses sensor data and invokes actions based on it via a rule engine. Every client of this application has a set of environmental sensors (tens to hundreds) and a set of rules that is evaluated every time the sensor values change (the sensor values are copied into a database). A basic set of rules is often reused across clients, but the rules are parameterized individually for each client (e.g. time-dependent), and every client has a different number of sensors and rules, which can be configured individually. Some rules may even be specific to a single client.
I believe Drools might be a good choice for such an implementation, using Drools Guvnor to manage the rules for each client. Every client would have its own knowledge base and rule execution session.
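To make the intended layout concrete, here is a minimal sketch of the per-client structure I have in mind: a shared rule template parameterized per client, with each client's rules evaluated only against that client's sensor snapshot. This is deliberately independent of the Drools API; the class and method names (`ClientRuleEngine`, `overThreshold`, `evaluate`) are illustrative placeholders, not Drools types.

```java
import java.util.*;
import java.util.function.Predicate;

public class ClientRuleEngine {
    // A rule is just a named condition over the latest sensor snapshot.
    // In Drools this would be a compiled DRL rule in the client's knowledge base.
    public record Rule(String name, Predicate<Map<String, Double>> condition) {}

    // One rule set per client, analogous to one knowledge base/session per client.
    private final Map<String, List<Rule>> rulesByClient = new HashMap<>();

    // A reusable base rule, parameterized per client (e.g. different thresholds).
    public static Rule overThreshold(String sensor, double threshold) {
        return new Rule("over-" + sensor,
                values -> values.getOrDefault(sensor, 0.0) > threshold);
    }

    public void register(String clientId, Rule rule) {
        rulesByClient.computeIfAbsent(clientId, k -> new ArrayList<>()).add(rule);
    }

    // Evaluate only this client's rules against its sensor values;
    // returns the names of the rules that fired.
    public List<String> evaluate(String clientId, Map<String, Double> sensorValues) {
        List<String> fired = new ArrayList<>();
        for (Rule r : rulesByClient.getOrDefault(clientId, List.of())) {
            if (r.condition().test(sensorValues)) {
                fired.add(r.name());
            }
        }
        return fired;
    }
}
```

The scalability question is then essentially how many such per-client rule sets (knowledge bases plus stateful sessions, in Drools terms) can be kept resident and evaluated concurrently.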
I wonder whether such an environment would scale, and whether there is a benchmark or real-world example where someone has used Drools in such a scenario.
Most benchmarks I could find assess rule engines by their ability to apply rules to a growing number of facts. The number of facts in my scenario would be relatively stable (per client); scalability would instead be limited by the number of clients and the concurrent use of many knowledge bases and sessions.
Any pointers to benchmarks or rule engine comparisons that address this scalability problem are welcome. I'd also be glad to hear about real-world implementations where every client has its own rules and dataset to work on.