1
votes

I am replacing a legacy thread pool with Java's ThreadPoolExecutor. In the legacy thread pool, 600 threads are created on start-up. With ThreadPoolExecutor, using the concepts of core threads, max threads and prestartAllCoreThreads(), the number of threads created on start-up can be limited.

Now,

  1. If fewer than corePoolSize threads are running, the Executor prefers adding a new thread rather than queuing.
  2. If corePoolSize or more threads are running, the Executor prefers queuing a request rather than adding a new thread.
  3. If a request cannot be queued, a new thread is created unless this would exceed maximumPoolSize, in which case the task will be rejected.
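For reference, a minimal sketch of that default policy; the pool sizes, queue capacity, and keep-alive time below are illustrative values, not ones from the question:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DefaultPolicyExample {
    public static void main(String[] args) {
        // 10 core threads, 50 max, bounded queue of 100 -- all numbers illustrative.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10, 50, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(100));
        pool.prestartAllCoreThreads(); // start all 10 core threads up front

        // With this configuration, the first 10 concurrent tasks run on core
        // threads, the next 100 wait in the queue, the next 40 get non-core
        // threads, and anything beyond that is rejected.
        pool.shutdown();
    }
}
```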

The 1st scenario is fine, but what I want is this: when the core threads are all busy, instead of tasks being queued (even with a bounded queue, say of size 100) and waiting for a core thread to become idle or for the queue to fill up, a new thread should be created from the non-core quota. My application is real-time and cannot tolerate tasks waiting in a queue.

So what I want is CoreThreads -> Non-CoreThreads -> Queue instead of CoreThreads -> Queue -> Non-CoreThreads.

That is, if all core threads are busy, create new threads; only once the pool is at its maximum size should tasks go into the queue and wait for a thread to become free.

Now, one way to do this is to extend the ThreadPoolExecutor class and override the execute method, but then I would have to copy almost the whole class. That is the only (rather dirty) way I could think of. Can anyone suggest another way?

Note: I cannot use a cached thread pool, as the number of threads needs to be limited.
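One sketch of getting the CoreThreads -> Non-CoreThreads -> Queue order without copying ThreadPoolExecutor (the sizes, keep-alive, and class name below are placeholders, not anything from the question): give the pool a work queue whose offer() declines new tasks, so the executor tries to add a thread first, plus a RejectedExecutionHandler that puts the task into the queue once the pool is already at maximumPoolSize.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ScalingThreadPool {

    public static ThreadPoolExecutor create(int core, int max, int queueCapacity) {
        // A queue that refuses offer(): the executor then tries to add a
        // (non-core) thread before queueing. Tasks only land in the queue via
        // the rejection handler below, once the pool is at maximumPoolSize.
        LinkedBlockingQueue<Runnable> queue =
                new LinkedBlockingQueue<Runnable>(queueCapacity) {
                    @Override
                    public boolean offer(Runnable r) {
                        return false;
                    }
                };

        RejectedExecutionHandler requeue = (r, executor) -> {
            try {
                // Pool is at maximumPoolSize: now actually enqueue the task
                // (blocks if the bounded queue is full).
                executor.getQueue().put(r);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new RejectedExecutionException("Interrupted while queueing", e);
            }
        };

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                core, max, 60L, TimeUnit.SECONDS, queue, requeue);
        pool.prestartAllCoreThreads();
        return pool;
    }
}
```

One side effect to be aware of: because offer() always declines, the pool grows toward maximumPoolSize even when some existing threads happen to be idle, which may or may not be acceptable for a given workload.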

1
[Found solution to the problem!!](stackoverflow.com/questions/19528304/…) – codingenious

1 Answer

3
votes

I see this more as a design issue than a threading-package issue.

One uses threading either to reduce latency or to increase throughput. Given that you are creating 600 threads, this is more a case of increasing throughput on a server. However, no modern server has 600 CPU cores, and you will suffer severely from context switching. It is easier and more efficient to have a fixed number of threads working on a set of queues.
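A rough sketch of that fixed-size approach; deriving the worker count from the CPU count is my assumption here, purely for illustration:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FixedPoolExample {
    public static void main(String[] args) {
        // Size the pool to the hardware instead of creating 600 threads; tasks
        // that arrive while all workers are busy wait in the executor's queue.
        int workers = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        for (int i = 0; i < 20; i++) {
            final int task = i;
            pool.submit(() ->
                    System.out.println("task " + task + " on "
                            + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}
```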

If you really think your case is justified, simply create your own interface that wraps a standard thread pool and add a bit of custom logic for launching on a separate thread. However, I really doubt this would improve your system's performance.
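A hedged illustration of that wrapper idea; the TaskRunner interface and the grow-before-queue logic are hypothetical, and the size adjustment is racy under concurrent submission, so treat it as a sketch rather than a drop-in implementation:

```java
import java.util.concurrent.ThreadPoolExecutor;

// Hypothetical wrapper: callers depend on this interface, not on the JDK pool,
// so the launch policy can be customised in one place.
interface TaskRunner {
    void run(Runnable task);
    void shutdown();
}

class PoolBackedTaskRunner implements TaskRunner {
    private final ThreadPoolExecutor pool;

    PoolBackedTaskRunner(ThreadPoolExecutor pool) {
        this.pool = pool;
    }

    @Override
    public void run(Runnable task) {
        // Custom launch logic: if every current thread is busy and the pool can
        // still grow, raise the core size by one so execute() adds a thread
        // instead of queueing. Racy under heavy concurrency; illustration only.
        if (pool.getActiveCount() >= pool.getPoolSize()
                && pool.getPoolSize() < pool.getMaximumPoolSize()) {
            pool.setCorePoolSize(Math.min(pool.getCorePoolSize() + 1,
                                          pool.getMaximumPoolSize()));
        }
        pool.execute(task);
    }

    @Override
    public void shutdown() {
        pool.shutdown();
    }
}
```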

Essentially, unless it is really, really justified, I don't think creating new threads is a better solution than queuing in real-time systems.