I've got a producer-consumer situation: a remote message queue that needs to be polled periodically for new task messages, and a local ExecutorService that executes the tasks. There are many of these task executors polling the MQ. The local executor has a fixed number of threads with relatively fixed throughput.
The problem is that I don't want the primary message loop to keep consuming remote messages it can't yet process, but I do want it constantly working whenever there is more work to do. Ideally I'd have at least one task queued up per thread, but not many more than that, so as not to starve the other workers polling the same queue.
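One way I could imagine enforcing "one queued task per thread" without any extra bookkeeping is a `ThreadPoolExecutor` with a bounded work queue plus `CallerRunsPolicy`: when the queue is full, the polling thread runs the task itself and therefore stops polling until capacity frees up. A minimal sketch (the sizes here are illustrative):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedPoolSketch {
    public static void main(String[] args) throws Exception {
        int nThreads = 4;
        // Fixed pool whose work queue holds at most nThreads pre-fetched
        // tasks ("one ready per thread"). When it is full, CallerRunsPolicy
        // makes the submitting (polling) thread execute the task inline,
        // which naturally pauses polling until a slot opens.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                nThreads, nThreads,
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(nThreads),
                new ThreadPoolExecutor.CallerRunsPolicy());

        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 20; i++) {
            // Never throws RejectedExecutionException with CallerRunsPolicy.
            pool.execute(done::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("completed=" + done.get());
    }
}
```

The trade-off is that the poll loop occasionally does task work itself, which delays its acknowledgement/visibility handling on the remote queue.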
Classic Producer-Consumer. The problem is that the ExecutorService interface abstracts away information I need (total threads, number of busy threads, etc.). The actual number of threads is configured at startup, when the corresponding ExecutorService is created; that executor is then injected into my main worker loop along with the abstraction for the remote message queue.
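Worth noting: if the injected executor is actually a `ThreadPoolExecutor`, those counts are already exposed and no decorator is needed, e.g.:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolIntrospection {
    public static void main(String[] args) throws Exception {
        // Executors.newFixedThreadPool returns a ThreadPoolExecutor
        // under the hood, so the cast is safe for the standard factories.
        ThreadPoolExecutor pool =
                (ThreadPoolExecutor) Executors.newFixedThreadPool(4);

        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 8; i++) {
            pool.execute(done::incrementAndGet);
        }

        // Approximate snapshots, but usually good enough for throttling:
        int maxThreads = pool.getMaximumPoolSize(); // configured size
        int busy       = pool.getActiveCount();     // threads running tasks
        int queued     = pool.getQueue().size();    // tasks still waiting

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("max=" + maxThreads + " completed=" + done.get());
    }
}
```

The catch is that this couples the worker loop to the concrete `ThreadPoolExecutor` type rather than the `ExecutorService` interface being injected.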
I feel like I'm missing something obvious. Currently, I'm leaning towards a decorator around the ExecutorService to track the counts. I'm curious: is there a more elegant solution that anyone out there has used?
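For reference, the decorator I have in mind would look roughly like the classic semaphore-bounded executor: a `Semaphore` caps tasks in flight at roughly pool size plus a small prefetch margin, so `submit` blocks the polling thread instead of letting it drain the remote queue. The names here (`BoundedExecutor`, the bound of `threads + 2`) are just illustrative:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class BoundedExecutor {
    private final ExecutorService delegate;
    private final Semaphore permits;

    public BoundedExecutor(ExecutorService delegate, int bound) {
        this.delegate = delegate;
        this.permits = new Semaphore(bound);
    }

    /** Blocks the calling (polling) thread until an execution slot frees up. */
    public void submit(Runnable task) throws InterruptedException {
        permits.acquire();
        try {
            delegate.execute(() -> {
                try {
                    task.run();
                } finally {
                    permits.release(); // free the slot whether the task fails or not
                }
            });
        } catch (RejectedExecutionException e) {
            permits.release(); // task never ran; give the permit back
            throw e;
        }
    }

    public static void main(String[] args) throws Exception {
        int threads = 2;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        // Bound = threads + 2: one task per thread plus a small prefetch buffer.
        BoundedExecutor bounded = new BoundedExecutor(pool, threads + 2);

        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 10; i++) {
            bounded.submit(done::incrementAndGet); // blocks when saturated
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("completed=" + done.get());
    }
}
```

The nice property is that it keeps the `ExecutorService` abstraction intact; the worker loop only sees blocking `submit` calls and never needs to know the thread counts at all.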