
In my project I'm trying to implement a Cortex-M0-like UART auto-baud-rate detection feature on a Cortex-M3, which unfortunately does not have this handy feature on board.

The idea is that the master sends two sync bits (a 1-0 sequence) at the beginning of each frame, which allows the slave to synchronize itself to the (unknown) baud rate by measuring the time between two falling edges: the one at the start bit and the one at the end of the high sync bit.

Basically I got this working with a timer input-capture which is connected to the UART RX pin, but only if my initial baud-rate is close to the actual baud-rate used by the master (+/- 15%).

However, problems arise if I use a slave default baud-rate which is much lower or higher than the actual baud-rate used. I can still measure the duration of the first two (sync) bits, adjust my baud-rate, but still I can't synchronize with the end of the frame/stop bit and thus I'm losing data.

For example, suppose I set my slave's UART to 9,600 baud by default and the master is actually sending at a much higher rate, say 230,400 baud:

  • UART RX will detect the start bit and start sampling at 9,600 baud
  • At the same time, I will measure the time between the falling edge of the start bit and the one at the end of the high sync bit to detect the actual baud rate
  • Once the measurement is done, I will adjust the UART baud rate to 230,400 baud
  • BUT, at that point, UART RX will still be waiting to take its first sample, since it expects the 1st data bit much later (as it would arrive at 9,600 baud). So even after adjusting the baud rate, 7 or 8 samples will still be taken although in reality the first two data bits are already over.
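For reference, the baud-rate calculation itself is the easy part. A minimal sketch (function and parameter names are mine, not from the question), assuming a free-running timer at `timer_clk_hz` and two capture values taken on the falling edges of the start bit and of the second (low) sync bit, which are exactly two bit times apart:

```c
#include <stdint.h>

/* Derive the baud rate from two input-capture timestamps taken on the
 * falling edges of the start bit and of the second sync bit (1-0
 * sequence). Those edges are exactly two bit times apart. */
static uint32_t baud_from_sync_edges(uint32_t t_start_edge,
                                     uint32_t t_sync_edge,
                                     uint32_t timer_clk_hz)
{
    /* unsigned subtraction is safe across one counter wrap-around */
    uint32_t delta = t_sync_edge - t_start_edge;
    if (delta == 0u)
        return 0;                       /* guard against bogus captures */
    /* two bit times elapsed, so bit time = delta / 2 ticks */
    return (2u * timer_clk_hz) / delta;
}
```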

I've tested this with the STM32F0's auto-baud-rate feature, and there I can select any default baud-rate value without losing data. So I guess that, since the M0 UART "knows" two sync bits have already passed, it only samples six more data bits after the baud-rate measurement to stay synchronized with the stop bit. But how can I achieve this behaviour manually?

I hope you get my point, even though it's somewhat hard for me to explain, and I appreciate your ideas!


2 Answers


You have a very good start from what you posted. It was a little unclear to me what you meant by a frame, since it could be a single UART character (typically 10 bits), or if it was a multi-byte packet.

It sounds like the issue you're running into is that you are using the first two bits of the UART character to set the baud rate, and so if the difference between the UART's configured baud rate and incoming baud rate is great enough, the UART peripheral fails to recognize that there has been any data.

Generally, I wouldn't think to change baud rate in the middle of receiving a character, and I think the problem is that the UART is not adjusting as you expect.

I would suggest using a sync character. ASCII 'U' makes a good choice because it generates an alternating sequence of 1 and 0 on the serial line. Depending on your application needs, you may be able to send one sync character to determine the initial baud rate, or you could send one before every data transfer if the baud rate is expected to change significantly. If you are just compensating for fluctuations in speed, you could use a hybrid approach where you set a baud rate based on a single sync character, and then tune that baud rate on every other character received.

For a single sync, your process would look a bit like this:

  1. Disable UART RX, enable input capture
  2. Wait until your input capture collects all the edges of the sync character.
  3. Calculate and set baud rate.
  4. Enable UART RX.
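Once the capture timestamps from step 2 are in hand, step 3 reduces to a small helper. A sketch (my own naming), assuming edges are captured on both transitions of an ASCII 'U' (0x55), whose start-bit falling edge and ninth (final) transition are exactly nine bit times apart; enabling and disabling UART RX (steps 1 and 4) is hardware-specific and omitted:

```c
#include <stdint.h>

/* 'U' (0x55) toggles the line every bit time: the start-bit falling
 * edge plus nine further transitions. So the first and last captured
 * edges of the sync character are exactly nine bit times apart. */
static uint32_t baud_from_sync_u(uint32_t first_edge, uint32_t last_edge,
                                 uint32_t timer_clk_hz)
{
    uint32_t span = last_edge - first_edge;  /* wrap-safe subtraction */
    if (span < 9u)
        return 0;                            /* implausible capture */
    return timer_clk_hz / (span / 9u);       /* ticks per bit -> baud */
}
```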

For a repeated sync, you would have the same process as above, but when the receiver went idle (this would need to be predetermined on both transmitter and receiver), it would return to step 1.

For a single sync with constant tuning, you would perform the same steps as above but keep the input capture running. When a character is received, use the first and last input-capture values since the last complete character (whole-character timing, typically 10 bits) to calculate and set a new baud rate, then clear your stored input-capture values ready for the next character.
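One wrinkle with the tuning variant: for an arbitrary character, the distance between the first and last edge depends on the bit pattern, so the divisor isn't always 10. One illustrative way to handle this (my own addition, not from the answer above) is to derive the position of the last transition from the decoded byte:

```c
#include <stdint.h>

/* For 8N1, LSB first: given the decoded byte, return how many bit
 * times separate the start-bit falling edge from the LAST transition
 * in the frame. Dividing the captured span by this value yields an
 * updated bit time. Bytes with few transitions (e.g. 0xFF -> 1) give
 * a coarser estimate than a 'U' (0x55 -> 9). */
static uint32_t bit_times_to_last_edge(uint8_t byte)
{
    uint8_t prev = 0;        /* level of the start bit */
    uint32_t last = 0;
    for (uint32_t i = 0; i < 9u; i++) {
        /* data bits 0..7, then the stop bit (always 1) */
        uint8_t cur = (i < 8u) ? (uint8_t)((byte >> i) & 1u) : 1u;
        if (cur != prev)
            last = i + 1u;   /* transition entering bit position i */
        prev = cur;
    }
    return last;
}
```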


Alternatively, based on your comment, it sounds like you cannot use a sync byte, so I think your solution, if the microcontroller's UART peripheral is not able to handle the method on its own, would be to implement the receiver entirely with the timer module. Here's how I would do it, roughly:

  1. Use one channel of the timer as input capture to trigger on either edge.
  2. Maintain a buffer of edge information (time and direction), 10 entries long (a 10-bit character has at most 10 transitions); on each interrupt, record the counter value and the direction of the edge.
  3. After receiving your first two transitions (three if you want to use the two sync bits instead of start bit and one sync bit), you have the counter values needed to calculate a baud rate.
  4. Based on the baud rate calculated in 3, set a second timer channel in capture compare to trigger an interrupt 10 (or more, if you're using 2 stop bits) bit times from the first edge detected.
  5. When the capture compare channel fires an interrupt, you now have in your buffer the time and direction of each edge and you should be able to reassemble that into a byte.
  6. Disable the capture compare channel, and reset your index in the edge buffer.
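The reassembly in step 5 can be done by sampling each data-bit centre against the recorded edges. A sketch under the assumptions above (8N1, LSB first; `edges[0]` is the start-bit falling edge; the structure and names are mine):

```c
#include <stdint.h>

struct edge {
    uint32_t t;     /* input-capture timestamp, timer ticks */
    uint8_t level;  /* line level AFTER the edge: 0 = falling, 1 = rising */
};

/* Step 5: reassemble a byte (8N1, LSB first) from the recorded edges.
 * edges[0] must be the start-bit falling edge; bit_time is in ticks. */
static uint8_t decode_edges(const struct edge *edges, int n_edges,
                            uint32_t bit_time)
{
    uint8_t byte = 0;
    for (int bit = 0; bit < 8; bit++) {
        /* centre of data bit `bit`: 1.5 bit times past the start edge,
         * plus one bit time per preceding data bit */
        uint32_t sample_t = edges[0].t + (3u * bit_time) / 2u
                          + (uint32_t)bit * bit_time;
        uint8_t level = edges[0].level;  /* inside the start bit: 0 */
        /* the line level at sample_t is the level after the last edge
         * not past it (signed difference makes the compare wrap-safe) */
        for (int i = 1; i < n_edges
                        && (int32_t)(edges[i].t - sample_t) <= 0; i++)
            level = edges[i].level;
        if (level)
            byte |= (uint8_t)(1u << bit);
    }
    return byte;
}
```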

If you're running on a Cortex-M3, even at a modest 32 MHz, you should have plenty of time to operate at 200 kbps. You may need to cycle through a few edge buffers, as I'm not sure how much time will be needed to convert edges into bytes; if characters arrive back to back, you'll want at least two buffers you can ping-pong between.

The details of how steps 1-6 would be implemented will vary based on the microcontroller you are using, but pretty much any timer peripheral should be able to handle this. Potential limitations are having to change which edge you trigger on if the timer doesn't support triggering on both simultaneously, and having to read the state of the pin to figure out which edge caused the interrupt if the timer doesn't report it. You'll also have to choose the timer's time base carefully, so that you have both enough resolution to time your bits at high rates and enough range in your counter to time your bits at low rates without multiple wrap-arounds.
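To make that time-base trade-off concrete, here is a quick sanity check (entirely illustrative: the ">= 16 ticks per bit" resolution threshold and the 16-bit counter assumption are my own example choices, not requirements from the answer):

```c
#include <stdint.h>

/* Does a candidate timer tick rate give enough resolution at the
 * fastest expected baud rate AND fit a whole 10-bit character into a
 * 16-bit counter at the slowest, without wrapping? */
static int timebase_ok(uint32_t tick_hz, uint32_t baud_min, uint32_t baud_max)
{
    uint32_t ticks_per_bit_fast  = tick_hz / baud_max;
    uint64_t ticks_per_char_slow = (uint64_t)tick_hz * 10u / baud_min;
    return ticks_per_bit_fast >= 16u && ticks_per_char_slow <= 65535u;
}
```

For instance, a 4 MHz tick covers 9,600 to 230,400 baud under these thresholds, while a 1 MHz tick fails on resolution at 230,400 and a 48 MHz tick overflows the counter at 1,200 baud.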


In my project I used the following algorithm:

There are two modes: unsynced and synced. Both devices (master and slave) run a timer for the synced mode. Whenever an exchange fails (or the timer expires), the devices switch to unsynced mode. In this mode the master repeatedly sends only the byte 0x7F to the slave, and the slave waits for the sync byte with its USART disabled and timer input capture enabled. When the slave synchronizes successfully, it sends an ACK byte to the master, and then both devices work in synced mode. Additionally, the slave sends a NACK if it is already synchronized.

On the wire (LSB first), byte 0x7F is a 0 start bit, then seven 1 data bits (#0 through #6), then a 0 for data bit #7, and then a 1 stop bit. This bit sequence allows the slave device to synchronize by timer input capture.
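With 0x7F, the two falling edges the input capture sees (at the start bit and at data bit #7) are exactly eight bit times apart, so the slave's measurement reduces to something like this sketch (my naming, as an illustration of the scheme):

```c
#include <stdint.h>

/* 0x7F on the wire: falling edge at the start bit, high through data
 * bits #0..#6, falling edge again at data bit #7 -> exactly eight bit
 * times between the two falling edges captured by the timer. */
static uint32_t baud_from_0x7f(uint32_t t_start_fall, uint32_t t_bit7_fall,
                               uint32_t timer_clk_hz)
{
    uint32_t delta = t_bit7_fall - t_start_fall;  /* wrap-safe */
    if (delta < 8u)
        return 0;                                 /* implausible capture */
    return timer_clk_hz / (delta / 8u);           /* ticks per bit -> baud */
}
```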