I would be grateful if someone could offer an explanation of the effects of too high a tick rate in an RTOS, or direct me to a resource that explains it clearly.
The context of the question... We are running µC/OS-II with a tick rate of 10000 (OS_TICKS_PER_SEC=10000). This is outside the recommended range of 10 to 100. While investigating another issue I noticed this setting and flagged it as an anomaly. There is no detail in the µC/OS-II manual (that I can see) explaining why this is the recommended range. We have 8 tasks (plus interrupts) running at different priorities, so I assume the higher tick rate means the highest-priority ready task gets switched in sooner. In our case that would ensure the user interface is serviced ahead of some less important maintenance tasks.
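For concreteness, here is a stripped-down sketch of the kind of configuration and task structure I am describing. DoMaintenanceWork() is just a placeholder, not our real code, and the timing comments assume OS_TICKS_PER_SEC really is 10000:

```c
/* In os_cfg.h (excerpt):
 *     #define OS_TICKS_PER_SEC   10000     -> tick period = 1/10000 s = 100 us
 * versus tick periods of 100 ms down to 10 ms for the recommended 10..100 range. */

#include "ucos_ii.h"

static void DoMaintenanceWork(void)
{
    /* placeholder for the real maintenance work */
}

/* A lower-priority task structured like ours.  OSTimeDly() is specified in
 * ticks, so the same numeric argument means very different real time at
 * different tick rates.                                                     */
static void MaintenanceTask(void *p_arg)
{
    (void)p_arg;
    for (;;) {
        DoMaintenanceWork();
        OSTimeDly(10);   /* 10 ticks = 1 ms at a 10 kHz tick, but 100 ms at a 100 Hz tick */
    }
}
```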
From what I can see, the consensus is to recommend against setting the tick rate in any RTOS "too" high because of "overhead", and it seems common to suggest using interrupts together with a lower tick rate. Fair enough, but I am unclear on what the detectable downsides are as the tick rate increases. For example, the FreeRTOS documentation states "The RTOS demo applications all use a tick rate of 1000Hz. This is used to test the RTOS kernel and is higher than would normally be required." It does go on to say that tasks of equal priority will switch often, which would lead to the kernel occupying a lot of the processing time, and that would clearly be negative. I presume that, eventually, the intended speed-up from raising the tick rate becomes counter-productive as the kernel consumes most of the processor time. Maybe this is the only answer I need. However, in our case all tasks have different priorities, so I do not think this is (as?) applicable.
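To put rough numbers on the "overhead" argument, here is a back-of-envelope calculation. The 5 µs per-tick cost is a figure I assumed purely for illustration; the real number would have to be measured on our port and CPU:

```c
/* Rough tick-overhead estimate: fraction of CPU consumed by tick handling
 * alone, as a function of tick rate.  The per-tick cost is an assumption.  */
#include <stdio.h>

int main(void)
{
    const double tick_cost_us = 5.0;   /* assumed cost of one tick ISR + scheduler pass */
    const double rates_hz[]   = { 100.0, 1000.0, 10000.0 };

    for (size_t i = 0; i < sizeof rates_hz / sizeof rates_hz[0]; i++) {
        double overhead_pct = tick_cost_us * 1e-6 * rates_hz[i] * 100.0;
        printf("%7.0f Hz tick -> %5.2f %% of CPU in tick handling\n",
               rates_hz[i], overhead_pct);
    }
    return 0;
}
```

With those assumed numbers the overhead goes from 0.05 % at 100 Hz to 5 % at 10 kHz, which is noticeable but not obviously fatal, so I am not sure overhead alone explains the recommended range.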
Ultimately, I am trying to determine whether our software is running with too high a tick rate, or how close it is to some threshold. I think the intended benefit during development was to stabilise the user interface. I am hoping the answer is not entirely empirical!
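In case it does come down to measurement, my fallback plan is to use the kernel's statistics task, roughly as sketched below. This assumes OS_TASK_STAT_EN is enabled in os_cfg.h; CreateApplicationTasks() and LogCpuUsagePercent() are placeholders for our own code:

```c
#include "ucos_ii.h"

static void CreateApplicationTasks(void) { /* placeholder: our 8 OSTaskCreate() calls */ }
static void LogCpuUsagePercent(int pct)  { (void)pct; /* placeholder: send to UART / UI */ }

/* Start-up task: calibrate the statistics task while it is the only task
 * running, then create the application tasks and periodically report
 * OSCPUUsage, which the kernel updates with the measured CPU load in %.   */
static void StartupTask(void *p_arg)
{
    (void)p_arg;

    OSStatInit();                              /* must run before the other tasks are created */
    CreateApplicationTasks();

    for (;;) {
        LogCpuUsagePercent((int)OSCPUUsage);
        OSTimeDlyHMSM(0u, 0u, 1u, 0u);         /* report once per second */
    }
}
```

Comparing the reported CPU usage at, say, 1000 and 10000 ticks per second would at least tell us how much of the processor the higher tick rate is actually costing us.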