0 votes

In a multicore processor, there are ways to tell a particular application to run on a single core, or on 2 or 3 cores. Consider a scenario in which an application (having many threads) is running on more than one core: how will the scheduler determine the load (number of threads) on a particular core of a multicore processor, and accordingly distribute (balance) the load (allocate threads) across the various cores?

Schedulers that are multicore-aware are a separate topic in themselves. You might want to look at this: software.intel.com/sites/oss/pdfs/mclinux.pdf. At a programmer's level, you must develop your algorithms such that parts that are not mutually dependent can be run in parallel (threads). Fast Fourier Transforms are a perfect example of parallelizing computations. There are parallel libraries available that facilitate multicore programming. – itisravi
Though I did not get the exact answer, it was a very good link. – Karthik Balaguru

2 Answers

1 vote

In most schedulers, every CPU is an independent entity that examines the current state of the system and tries to find something useful to do. Picture each CPU as a workaholic -- it will always try to do whatever work can be done. A scheduler is not a "boss" that tells the CPUs what to do next and makes sure everyone does a fair share. Rather, each CPU follows a scheduling algorithm in which it examines the state of the system and tries to do the most work it can.

The scheduling algorithm may have some provision for "thread affinity", which means a CPU will prefer to run a previously scheduled thread, since that thread's working set is more likely to still be in that CPU's cache. Quite unlike network load balancing, however, scheduling algorithms are usually (but not always) concerned with keeping every CPU as busy as possible, even if the resulting distribution of work ends up being unfair.

Why? If the workload is CPU-intensive, then every CPU will be able to run at close to 100%, and the workload will be fair. If the workload is I/O-intensive and CPUs spend most of their time waiting for shared resources to become available, which is the normal case for a real-world system, then any load-balancing strategy is likely at odds with simply releasing shared resources as quickly as possible.

A simple multi-CPU scheduler would include a queue of runnable threads and a list of blocked threads -- this queue and list being data structures shared by all CPUs, with accesses protected by locking. When a CPU enters the scheduler, it will select the highest-priority runnable thread and run that thread until the thread blocks or its allotted timeslice expires. If the thread blocks, it is placed on the list of blocked threads until it becomes runnable again. If the timeslice expires, the thread is placed in a deferred position in the runnable thread queue, and another thread is selected.

0 votes

In Linux you can use taskset -c <cpu-list> ./executable, where <cpu-list> is a list of CPU numbers such as 0,2,4.