2
votes

I have a single producer, single consumer problem which (I believe) can be solved using a circular/ring buffer.

I have a micro-controller running an RTOS, with an ISR (Interrupt Service Routine) handling UART (serial port) interrupts. When the UART raises an interrupt, the ISR posts the received characters into the circular buffer. In another RTOS task (called packet_handler), I read from this circular buffer and run a state machine to decode the packet. A valid packet is 64 bytes, including all the framing bytes.

The UART operates at 115200 baud, and a packet arrives every 10 ms. The packet_handler task runs every 10 ms. However, this handler may sometimes be delayed by the scheduler when a higher-priority task is executing.

If I use an arbitrarily large circular buffer, there are no packet drops. But how do I determine the optimal circular buffer size, at least theoretically? And how is this decision made in practice?

Currently, I am detecting overrun of the buffer through some instrumentation functions and then increasing the buffer size to reduce packet loss.

4
Your packet handler must have a cycle time < 10ms. Remember that the 10ms on the receiver CPU aren't 10ms on the sender CPU and therefore they are asymptotically drifting away from each other (assuming that there is no other sync mechanism). If your packet_handler processes only one packet and your sender runs slightly faster than the receiver, you will always run out of memory, no matter how large the buffer. This is just Shannon in disguise :) - Vroomfondel
Ok. I can change the periodicity of packet_handler after taking into account the maximum clock drift of the sender with respect to the receiver. Further, I will reduce the period of packet_handler a bit more so that the receiver always runs at a slightly faster pace than the transmitter. Is the problem now solvable with a finite buffer? - SRK
Is the buffer in question a packet buffer or a character buffer? - Clifford

4 Answers

1
votes

You won't ever be safe, because you are dealing with a stochastic process (according to your description). Answering your question: you would need an infinite buffer to cover the case where the consumer task stays in the ready state for an unbounded time. So you will have to change something in your initial approach:

  • Increase the priority of the consumer to guarantee the 10 ms execution (the smallest-buffer approach, but it may not be possible).
  • Characterize your model better, so that you can predict the maximum span of time during which the consumer task won't execute (make your system as predictable as possible).
  • Accept lost packets with an empirically chosen buffer size (it may not be safe).
0
votes

I would calculate in this way:

64 bytes just received
64 bytes still in the ring buffer
+100% to be safe
===================
256-byte buffer

But this is just a guess. You would have to run some worst-case tests with your buffer and then add 100% on top to be safe.

0
votes

While all of the above answers are correct and throw light on the issue, this page summarizes all the factors to consider when choosing the size of a ring buffer.

Some queuing models can be used to analyze the problem at hand theoretically and find a suitable ring buffer size.

A more pragmatic approach is to start with a large buffer, find the maximum buffer occupancy in a real test run (this process is called watermarking), and use that figure in the final code.

0
votes

It is simply a matter of determining the maximum possible delay - the sum of the execution times of all higher priority tasks that can run - and dividing that by the packet arrival interval.

This may not be straightforward, but it can be simplified by making only the most deterministic tasks higher priority and moving less deterministic, longer-running tasks to lower priorities, according to rate-monotonic principles. Tasks that frequently run for a short time but sporadically run longer are candidates for being split into two tasks (with further queues between them) to offload the longer execution to a lower priority.