0
votes

I am implementing a protocol decoder which receives bytes through the UART of a microcontroller. The ISR takes bytes from the UART peripheral and puts them in a ring buffer. The main loop reads from the ring buffer and runs a state machine to decode them.

The UART internally has a 32-byte receive FIFO, and provides interrupts when this FIFO is quarter full, half full, three-quarters full and completely full. How should I determine which of these interrupts should trigger my ISR? What is the tradeoff involved?

Note: the protocol uses fixed-length 32-byte packets, sent every 10 ms.
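For reference, here is a rough sketch of that arrangement; the register symbols UART_RXNE / UART_RXDATA and the ISR vector name are placeholders for whatever the actual part defines:

    #include <stdint.h>
    #include <stdbool.h>

    #define RB_SIZE 128u                    /* power of two, holds > 2 packets */

    static volatile uint8_t  rb_buf[RB_SIZE];
    static volatile uint32_t rb_head;       /* written only by the ISR   */
    static volatile uint32_t rb_tail;       /* written only by main loop */

    void UART_RX_IRQHandler(void)           /* placeholder vector name */
    {
        while (UART_RXNE) {                 /* drain whatever the FIFO holds */
            uint8_t  byte = UART_RXDATA;    /* placeholder data register */
            uint32_t next = (rb_head + 1u) & (RB_SIZE - 1u);
            if (next != rb_tail) {          /* drop the byte if the buffer is full */
                rb_buf[rb_head] = byte;
                rb_head = next;
            }
        }
    }

    bool rb_pop(uint8_t *out)               /* called from the main loop */
    {
        if (rb_tail == rb_head)
            return false;
        *out = rb_buf[rb_tail];
        rb_tail = (rb_tail + 1u) & (RB_SIZE - 1u);
        return true;
    }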

This question should be moved to electronics.stackexchange.com - Swanand
The question is about how software should handle this case. I think this question can be answered here on Stack Overflow. - SRK
@Swanand It is on-topic on either site. - Lundin
@SreekeshSreelal When posting questions like this on SO, make sure to include the embedded tag. That will draw attention to your question from the right kind of people. I have updated the tags. - Lundin
Thanks @Swanand for the correct guidelines! - SRK

2 Answers

4
votes

This depends on a lot of things, most of all the maximum baudrate supported, and how much time your application needs for executing other tasks.

Traditional ring buffers work on a byte-per-byte interrupt basis. It is of course always nice to reduce the number of interrupts, but it probably doesn't matter much which FIFO level you let trigger it.

It is much more important to implement a double-buffer scheme. You should of course not run the decoding state machine straight out of a single ring buffer; that will turn into a race-condition nightmare.

Your main program should take the semaphore/disable the UART interrupt, copy out the whole buffer, then re-enable the interrupt. Ideally the "copy" is done by swapping a pointer rather than doing a hard copy. The code doing this needs to be benchmarked to complete in less than 10/baudrate seconds, where 10 is 1 start bit, 8 data bits and 1 stop bit, assuming the UART is set to 8-N-1 (roughly 87 µs per byte at 115200 baud, for example).
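A minimal sketch of that pointer swap, assuming a CMSIS-style __disable_irq()/__enable_irq() pair (or masking just the UART interrupt) and placeholder register/vector names:

    #include <stdint.h>

    #define PACKET_LEN 32u

    static uint8_t  buf_a[PACKET_LEN], buf_b[PACKET_LEN];
    static uint8_t *volatile isr_buf  = buf_a;   /* ISR writes here        */
    static uint8_t *volatile main_buf = buf_b;   /* main loop decodes here */
    static volatile uint32_t isr_count;          /* bytes written by ISR   */

    void UART_RX_IRQHandler(void)                /* placeholder vector name */
    {
        uint8_t byte = UART_RXDATA;              /* placeholder data register */
        if (isr_count < PACKET_LEN)
            isr_buf[isr_count++] = byte;
    }

    /* Called from the main loop: swap the buffers inside a short critical
       section, then decode main_buf at leisure while the ISR fills the other. */
    uint32_t claim_received_bytes(void)
    {
        __disable_irq();                         /* or mask only the UART IRQ */
        uint8_t *tmp = isr_buf;
        isr_buf   = main_buf;
        main_buf  = tmp;
        uint32_t n = isr_count;
        isr_count  = 0;
        __enable_irq();
        return n;                                /* decode main_buf[0..n-1] now */
    }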

If available, use DMA over software ring buffers.

1
votes

Given a packet-based protocol and a UART that interrupts when more than one byte has been received, consider what should happen if the final byte of a packet is received but that final byte isn't enough to fill the FIFO past the threshold and trigger an interrupt. Is your application simply not going to receive that incomplete packet until some subsequent packet is received and the FIFO finally fills enough? What if the other end is waiting for a response and never sends another packet? Or is your application supposed to poll the UART to check for lingering bytes remaining in the UART FIFO? Using both an interrupt and polling for received bytes seems overly complicated.

With the packet-based protocols I have implemented, the UART driver does not rely on the UART FIFO and configures the UART to interrupt when a single byte is available. This way the driver gets notified for every byte and there is no chance for the final byte of a packet to be left lingering in the UART's FIFO.
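As a rough illustration of that approach, with placeholder register names and framing/sync handling omitted for brevity:

    #include <stdint.h>
    #include <stdbool.h>

    #define PACKET_LEN 32u

    static uint8_t  packet[PACKET_LEN];
    static uint32_t packet_pos;
    static volatile bool packet_ready;

    void UART_RX_IRQHandler(void)            /* fires for every received byte */
    {
        uint8_t byte = UART_RXDATA;          /* placeholder data register */

        if (packet_ready)
            return;                          /* previous packet not consumed yet */

        packet[packet_pos++] = byte;
        if (packet_pos == PACKET_LEN) {
            packet_pos   = 0;
            packet_ready = true;             /* main loop decodes it, then clears the flag */
        }
    }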

The UART's FIFO can be convenient for streaming protocols (such as audio or video data). When the driver is receiving a stream of data then there will always be incoming data to keep filling the FIFO. The driver can rely on the UART's FIFO to buffer some data. The driver can be more efficient by processing multiple bytes per interrupt and reducing the interrupt rate.
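A sketch of that style of handler, again with placeholder names; the ISR fires at a FIFO threshold and drains everything currently buffered in one go:

    #include <stdint.h>

    extern void stream_consume(const uint8_t *data, uint32_t len);

    void UART_RX_IRQHandler(void)            /* e.g. triggered at FIFO half-full */
    {
        uint8_t  chunk[32];
        uint32_t n = 0;

        /* Read every byte currently in the hardware FIFO, so the interrupt
           rate drops to roughly one per threshold instead of one per byte. */
        while (UART_RXNE && n < sizeof chunk)
            chunk[n++] = UART_RXDATA;

        stream_consume(chunk, n);
    }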

You might consider using the UART FIFO since your packets are a fixed length. But consider how the driver would recover if a single byte is dropped due to noise or some other fault. I think it's still best not to rely on the FIFO for packet-based protocols, regardless of whether the packets are fixed length.
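One possible recovery sketch is to reset the packet assembler whenever the gap between bytes exceeds a few byte times, so a dropped byte only costs the current frame instead of shifting every later packet by one byte. Here now_ms() and MAX_GAP_MS are assumptions for illustration, not part of the protocol:

    #include <stdint.h>

    #define PACKET_LEN  32u
    #define MAX_GAP_MS  3u                   /* assumed threshold, well under the 10 ms period */

    extern uint32_t now_ms(void);            /* hypothetical monotonic millisecond tick */

    static uint8_t  packet[PACKET_LEN];
    static uint32_t packet_pos;
    static uint32_t last_byte_ms;

    void on_rx_byte(uint8_t byte)            /* called for each received byte */
    {
        uint32_t t = now_ms();

        if (packet_pos != 0 && (t - last_byte_ms) > MAX_GAP_MS)
            packet_pos = 0;                  /* stale partial packet: resynchronize */

        last_byte_ms = t;
        packet[packet_pos++] = byte;

        if (packet_pos == PACKET_LEN) {
            packet_pos = 0;
            /* hand the complete packet off for decoding here */
        }
    }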