3
votes

I would be grateful if someone could offer an explanation of the effects of setting the tick rate too high in an RTOS, or direct me to a resource that explains it clearly.

The context of the question... We are running µC/OS-II with a tick rate of 10000 (OS_TICKS_PER_SEC=10000). This is outside the recommended range of 10 to 100. While investigating another issue I noticed this setting and flagged it as an anomaly. There is no detail in the µC/OS manual (that I can see) explaining why this is the recommended range. We have eight tasks (plus interrupts) running at different priorities, so I would assume the higher tick rate means we are switching in the highest priority task faster. In our case we would be ensuring that the user interface is serviced ahead of some less important maintenance tasks.

From what I can see, the consensus is to recommend against setting the tick rate in any RTOS "too" high due to "overhead". It seems common to suggest using interrupts with lower tick rates. Fair enough, but I am unclear on what the detectable downsides are as the tick rate increases. For example, the FreeRTOS documentation states "The RTOS demo applications all use a tick rate of 1000Hz. This is used to test the RTOS kernel and is higher than would normally be required." It goes on to say that tasks of equal priority will switch often, which would lead to the kernel occupying a lot of the processing time - clearly a negative. I presume the intended speed-up from increasing the tick rate would eventually become counterproductive as the kernel consumes most of the processor time. Maybe this is the only answer I need. However, in our case all tasks have a different priority, so I do not think this is (as?) applicable.

Ultimately, I am trying to determine if our software is running with too high a tick rate or how close it is to some threshold. I think the intended benefit during development was to stabilise the user interface. I am hoping the answer is not entirely empirically based!


2 Answers

10
votes

The scheduler runs on every tick. If, for example, the scheduler takes 10 microseconds to run and a tick occurs every 10 ms, the scheduling overhead in the absence of any other scheduling events is 0.1%; if the tick occurs every 100 µs, the overhead is 10%. In the extreme case the tick rate could be so high that you are always in the scheduler and never actually running tasks!
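
As a rough back-of-the-envelope check, here is a minimal sketch of that arithmetic in C (the 10 µs handler time is an assumed figure; you would have to measure the real tick-handler cost on your own target):

    /* Assumed, illustrative figures - measure the real values on your target. */
    #define TICK_HANDLER_US  10u     /* time spent in the tick interrupt + scheduler */
    #define TICK_PERIOD_US   100u    /* 100 us tick, i.e. OS_TICKS_PER_SEC = 10000   */

    /* Fraction of CPU time consumed by the tick alone:                       */
    /* 10/100 = 10% here, versus 10/10000 = 0.1% for a 10 ms (100 Hz) tick.   */
    #define TICK_OVERHEAD_PERCENT  ((100.0f * TICK_HANDLER_US) / TICK_PERIOD_US)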

The actual scheduling overhead will of course depend on the processor speed. A faster processor will be able to cope with a faster tick, but there is no benefit to running a faster tick than your application needs, as it eats CPU time that could be used for useful work. The recommendation of 10 to 100 probably relates to what is adequate for most systems; the aim is to be only as fast as necessary.

Spending more time in the scheduler than necessary also increases scheduling latency and jitter for tasks that are scheduled on events other than timers or delays. If, for example, an interrupt occurs and its handler triggers a task, that task may be delayed if the interrupt arrives while the scheduler is already processing the tick.

A faster tick rate does not make anything run faster; it simply increases the resolution of the timers and delays that may be used, but conversely it reduces their range. A 16-bit timer at a 100 µs tick rate will roll over after 6.55 seconds, while at a 10 ms tick it will roll over after 10 minutes 55 seconds. If the timers are 32-bit, this is perhaps less of an issue.
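
That roll-over arithmetic can be checked with a tiny standalone C program (nothing RTOS-specific here, just the numbers from the paragraph above):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint32_t ticks_to_rollover = 1uL << 16;          /* 16-bit tick counter    */
        const double   tick_period_s[]   = { 100e-6, 10e-3 };  /* 100 us and 10 ms ticks */

        for (int i = 0; i < 2; i++) {
            printf("tick = %g s -> 16-bit counter rolls over after %.2f s\n",
                   tick_period_s[i], ticks_to_rollover * tick_period_s[i]);
        }
        return 0;                                  /* prints 6.55 s and 655.36 s */
    }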

You need to ask yourself what resolution (and possibly range) you need from timers and delays; it seems unlikely that you need 100 µs resolution if the UI is the "most important" task (although "importance" is an inappropriate basis for priority allocation in a real-time system, so that is already ringing alarm bells!).

If you need higher resolution for just one task - sampling an ADC at the Nyquist rate, for example - then you would be better off using an independent hardware timer for that. If the tick is set that fast to get a timely response to polled events, you would do better to arrange for such events to generate interrupts instead.
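
For illustration, a minimal sketch of the ADC case under µC/OS-II, assuming a dedicated hardware timer whose ISR you install through your board-support code (SampleTimerISR, AdcSamplingTask and the sample-reading step are made-up names; the OSSem* and OSInt* calls are the standard µC/OS-II ones):

    #include "ucos_ii.h"

    static OS_EVENT *AdcSampleSem;   /* created at start-up: AdcSampleSem = OSSemCreate(0); */

    /* ISR of a dedicated hardware timer running at the sampling rate, */
    /* completely independent of the RTOS tick.                        */
    void SampleTimerISR(void)
    {
        OSIntEnter();
        OSSemPost(AdcSampleSem);     /* wake the sampling task */
        OSIntExit();
    }

    void AdcSamplingTask(void *p_arg)
    {
        INT8U err;
        (void)p_arg;

        for (;;) {
            OSSemPend(AdcSampleSem, 0, &err);  /* block until the timer ISR signals     */
            /* read and process the ADC sample here; the OS tick rate no longer         */
            /* determines the sampling period.                                          */
        }
    }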

I would assume the higher tick rate means we are switching in the highest priority task faster.

Not faster, possibly more frequently. The tick rate does not affect context-switch time, and a task waiting on a timer or delay will run when the timer/delay expires. When the tick occurs, timers and delays are decremented, and a context switch occurs when one expires. By having a faster tick, you simply increase the number of times the scheduler runs and decides to do nothing! Normally you would set timers and delays to values that take the tick rate into account, so that changing the tick rate does not affect the timing of existing tasks.
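
As a sketch of what "taking the tick rate into account" might look like in µC/OS-II (MS_TO_TICKS and MaintenanceTask are made-up names for illustration; OSTimeDly() and OS_TICKS_PER_SEC are the standard kernel names):

    #include "ucos_ii.h"

    /* Express delays in milliseconds and convert once, so changing      */
    /* OS_TICKS_PER_SEC does not silently change the task's period.      */
    /* Note the integer division rounds down at very coarse tick rates.  */
    #define MS_TO_TICKS(ms)   (((INT32U)(ms) * OS_TICKS_PER_SEC) / 1000u)

    void MaintenanceTask(void *p_arg)
    {
        (void)p_arg;
        for (;;) {
            /* ... periodic housekeeping work ... */
            OSTimeDly(MS_TO_TICKS(50));   /* a 50 ms period regardless of the tick rate */
        }
    }

If I remember correctly, µC/OS-II also provides OSTimeDlyHMSM(), which does a similar conversion from hours/minutes/seconds/milliseconds for you.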

0
votes

Yes, it is simple, I hope. When we make the scheduler check for ready tasks more often, the scheduler itself consumes more time and leaves less time for the tasks to run, but the responsiveness of the system is higher; the processor does more work and consumes more power. By contrast, when we make the scheduler check less often, responsiveness decreases, but more time is available for the tasks to run and less CPU power is used. In conclusion, we should set the tick rate based on the responsiveness the system needs. I think a 100 µs tick would give good responsiveness.