
I'm hoping to work up a very basic audio effects device using a RISC-V GD32VF103CBT6 Development Board. I have managed to do some hardware-interrupt-based sampling with another MCU, but I'm a bit confused by the documentation for the RISC-V board, specifically Chapter 11 (the ADC chapter) of the user manual. I haven't the slightest idea how to turn the instructions there into actual C/C++ code. Sadly, their GitHub repo has almost no examples at all, and none appear to deal with high-speed sampling. There's also a datasheet in that GitHub repo, but I haven't been able to find any specific code examples or revealing instruction in there, either.

What I want to do is:

  • Perform the calibration described in the user manual, which must precede sampling operations.
  • Collect 12-bit samples of the audio signal's voltage from an external pin, using the ADC's oversampling capability to sum several 12-bit samples into a single 16-bit sample at a high sampling rate. Ultimately I want 16-bit audio sampled at 48 kHz to 96 kHz.
  • I need help instructing the MCU to collect these samples using its built-in hardware features.
  • Continuously sample, offloading as much as possible to built-in hardware functions so that enough processing overhead is left to do a bit of signal processing for simple effects.

Section 11.4.1 clearly says:

Calibration should be performed before starting A/D conversion. The calibration is initiated by software by setting bit CLB=1. CLB bit stays at 1 during all the calibration sequence. It is then cleared by hardware as soon as the calibration is completed. The internal analog calibration can be reset by setting the RSTCLB bit in ADC_CTL1 register.

Calibration software procedure:
1) Ensure that ADCON=1.
2) Delay 14 ADCCLK to wait for ADC stability
3) Set RSTCLB (optional)
4) Set CLB=1.
5) Wait until CLB=0.

Question 1: How do I set these memory registers as these instructions indicate? I need a code example, and the manufacturer provides none.

Question 2: How do I delay 14 ADCCLK in C/C++? A loop seems like it would be enormously inefficient. Should I call sleep()? Any explanation of ADCCLK would also be helpful.
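For what it's worth, here is my rough guess at how the calibration sequence might translate into C using direct register access. The addresses and bit positions are my own reading of section 11.8 of the manual (ADC0 base 0x40012400), so they may well be wrong, and the stability delay is just a crude placeholder for whatever Question 2 turns out to need:

    #include <stdint.h>

    /* Addresses and bit positions are my reading of the GD32VF103 user
     * manual (section 11.8) -- please double-check them. */
    #define RCU_APB2EN        (*(volatile uint32_t *)0x40021018u)
    #define RCU_APB2EN_ADC0EN (1u << 9)

    #define ADC0_CTL1         (*(volatile uint32_t *)0x40012408u)
    #define ADC_CTL1_ADCON    (1u << 0)   /* A/D converter on       */
    #define ADC_CTL1_CLB      (1u << 2)   /* calibration start/busy */
    #define ADC_CTL1_RSTCLB   (1u << 3)   /* reset calibration      */

    static void adc0_calibrate(void)
    {
        RCU_APB2EN |= RCU_APB2EN_ADC0EN;        /* clock the ADC0 peripheral */

        ADC0_CTL1 |= ADC_CTL1_ADCON;            /* 1) ensure ADCON = 1 */

        /* 2) delay 14 ADCCLK for ADC stability -- this crude loop is
         *    exactly what Question 2 is about; 1000 is an arbitrary count */
        for (volatile int i = 0; i < 1000; i++) { }

        ADC0_CTL1 |= ADC_CTL1_RSTCLB;           /* 3) reset calibration (optional) */
        while (ADC0_CTL1 & ADC_CTL1_RSTCLB) { } /*    hardware clears it when done */

        ADC0_CTL1 |= ADC_CTL1_CLB;              /* 4) start calibration */
        while (ADC0_CTL1 & ADC_CTL1_CLB) { }    /* 5) wait until CLB = 0 */
    }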

This also seems important, but I have no idea what it portends:

The ADCCLK clock provided by the clock controller is synchronous APB2 clock. The RCU controller has a dedicated programmable prescaler for the ADC clock.

I am not at all certain, but I think this is the conversion mode I want:

Continuous conversion mode

This mode can be run on the regular channel group. The continuous conversion mode will be enabled when CTN bit in the ADC_CTL1 register is set. In this mode, the ADC performs conversion on the channel specified in the RSQ0[4:0]. When the ADCON has been set high, the ADC samples and converts specified channel, once the corresponding software trigger or external trigger is active. The conversion data will be stored in the ADC_RDATA register.

Software procedure for continuous conversion on a regular channel. To get rid of checking, DMA can be used to transfer the converted data:

1. Set the CTN and DMA bit in the ADC_CTL1 register
2. Configure RSQ0 with the analog channel number
3. Configure ADC_SAMPTx register
4. Configure ETERC and ETSRC bits in the ADC_CTL1 register if in need
5. Prepare the DMA module to transfer data from the ADC_RDATA.
6. Set the SWRCST bit, or generate an external trigger for the regular group
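Again purely guessing, here is roughly how I imagine the ADC side of that procedure might look in C. The register offsets, the ETSRC value for the software trigger, and the channel number are all my assumptions; the DMA controller setup in step 5 is omitted because I don't understand it yet; and this assumes ADCON is already set and calibration is done, as in the sketch above:

    #include <stdint.h>

    /* ADC0 register addresses per my reading of section 11.8 -- verify them. */
    #define ADC0_CTL1        (*(volatile uint32_t *)0x40012408u)
    #define ADC0_SAMPT1      (*(volatile uint32_t *)0x40012410u) /* channels 0-9    */
    #define ADC0_RSQ2        (*(volatile uint32_t *)0x40012434u) /* holds RSQ0[4:0] */
    #define ADC0_RDATA       (*(volatile uint32_t *)0x4001244Cu)

    #define ADC_CTL1_CTN     (1u << 1)    /* continuous conversion mode         */
    #define ADC_CTL1_DMA     (1u << 8)    /* DMA request enable                 */
    #define ADC_CTL1_ETSRC   (7u << 17)   /* 0b111 = software trigger (I think) */
    #define ADC_CTL1_ETERC   (1u << 20)   /* external trigger enable, regular   */
    #define ADC_CTL1_SWRCST  (1u << 22)   /* software start, regular group      */

    #define AUDIO_CHANNEL    1u           /* placeholder channel number         */

    static void adc0_start_continuous(void)
    {
        /* 1) continuous mode + DMA requests */
        ADC0_CTL1 |= ADC_CTL1_CTN | ADC_CTL1_DMA;

        /* 2) put the audio channel in RSQ0 (first and only regular conversion) */
        ADC0_RSQ2 = AUDIO_CHANNEL;

        /* 3) sample time: 3 bits per channel in ADC_SAMPT1 for channels 0-9;
         *    7 selects the longest sample time just to start with */
        ADC0_SAMPT1 |= 7u << (3u * AUDIO_CHANNEL);

        /* 4) route the regular-group trigger to the software trigger */
        ADC0_CTL1 |= ADC_CTL1_ETSRC | ADC_CTL1_ETERC;

        /* 5) DMA controller setup to drain ADC0_RDATA would go here (omitted) */

        /* 6) kick off the first conversion; CTN keeps it running after that */
        ADC0_CTL1 |= ADC_CTL1_SWRCST;
    }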
@KamilCuk I have edited the post to remove the comment. It's pretty remarkable how finicky SO and SE are. – S. Imp
If someone votes to close a question, isn't it encouraged to give a reason why the question should be closed? – S. Imp
"how to set the registers and collect the samples on this specific platform" Everything is in the manual. 11.8 ADC registers shows the register addresses. You can control them manually by dereferencing addresses, e.g. *(volatile uint32_t*)0x40012404 |= 1 << 23; would set the RWDEN bit of the ADC_CTL0 register of ADC0 (if I'm counting right). You just write to the specified memory address - this is how you communicate with hardware. Your question is very broad - you ask how to control registers, how to design your whole application, how to choose an ADC mode, and how to delay. Research the datasheet. – KamilCuk
Usually vendors distribute a whole HAL to communicate with the hardware and for abstraction. Searching GitHub for that MCU results in a lot of hits, including many HALs written for Rust (which seems popular), but also a whole standard library for that MCU by risc-mcu. Inside you can find examples, including a calibration function. GitHub is full of code. – KamilCuk
I recommend removing DMA from this question. You definitely want to use it for high-speed sampling, but it's going to be a question or two by itself. – user4581301

1 Answer


ADCCLK refers to the input clock of the ADC. Maybe take a look at your datasheet: most µCs have a block diagram of the clock architecture. Usually there is a main system clock, and the different peripherals have a prescaler that you can program, which divides the system clock by some power of 2.

So 14 ADCCLK cycles means it is not 14 CPU cycles but 14 edges of the ADC's input clock. For example, if the ADC prescaler is set to 64, then you have to wait 64*14 CPU clock cycles.
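One way to burn that many CPU cycles is to poll the RISC-V mcycle counter. This is only a sketch: it assumes bare-metal machine-mode code and that CK_APB2 runs at the CPU clock, and ADC_PRESCALER is a placeholder for whatever divider you actually configure in the RCU:

    #include <stdint.h>

    /* Read the low 32 bits of the RISC-V cycle counter (machine mode). */
    static inline uint32_t read_mcycle(void)
    {
        uint32_t c;
        __asm__ volatile ("csrr %0, mcycle" : "=r"(c));
        return c;
    }

    /* Busy-wait for at least `cycles` CPU clock cycles. */
    static void delay_cpu_cycles(uint32_t cycles)
    {
        uint32_t start = read_mcycle();
        while ((uint32_t)(read_mcycle() - start) < cycles) {
            /* spin; unsigned subtraction handles counter wrap-around */
        }
    }

    /* 14 ADCCLK cycles expressed in CPU cycles -- ADC_PRESCALER is just an
     * example value, use the divider you actually program in the RCU. */
    #define ADC_PRESCALER    8u
    #define ADC_STAB_CYCLES  (14u * ADC_PRESCALER)

    /* usage: delay_cpu_cycles(ADC_STAB_CYCLES); */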

How to wait at all:
Most peripherals have a busy flag (I do not know whether such a thing is present on your device) that stays set as long as the current operation is ongoing, so maybe you can poll that flag (e.g. something like while (ADC0_FLAGS & ADC_ISBUSY);).
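On this specific chip, the calibration "busy flag" appears to be the CLB bit itself, since your manual excerpt says hardware clears it once calibration completes. So the polling loop could look like this (the address and bit position are my reading of the register map, so verify them):

    #include <stdint.h>

    #define ADC0_CTL1    (*(volatile uint32_t *)0x40012408u)  /* ADC0 base + 0x08 */
    #define ADC_CTL1_CLB (1u << 2)                            /* calibration flag */

    static void adc0_wait_calibration_done(void)
    {
        /* CLB is set by software to start calibration and cleared by
         * hardware when it finishes, so it doubles as the busy flag. */
        while (ADC0_CTL1 & ADC_CTL1_CLB) {
            /* calibration still running */
        }
    }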

Another option may be to check whether there is an interrupt that signals the completion of your operation. But at least for the calibration, the simplest thing would be to start it and just use a wait or delay function that wastes a bit of time (something like the mcycle-based delay sketched above would do).

I personally would start the calibration at system start-up and then do the other initialization stuff. Maybe delay a few milliseconds at the end of setup to make sure all components on the board are powered up correctly. By then the calibration should have finished long ago.
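A sketch of that ordering follows; every function name in it is a placeholder for your own code, not a vendor API:

    #include <stdint.h>

    /* All of these are hypothetical placeholders for your own init code. */
    extern void adc0_power_on_and_start_calibration(void); /* ADCON=1, then CLB=1 */
    extern void do_other_board_init(void);                 /* clocks, GPIO, DMA   */
    extern void delay_ms(uint32_t ms);
    extern int  adc0_calibration_done(void);               /* CLB bit cleared?    */

    void system_init(void)
    {
        adc0_power_on_and_start_calibration(); /* start calibration early            */
        do_other_board_init();                 /* calibration runs in the background */
        delay_ms(5);                           /* let board components power up      */

        while (!adc0_calibration_done()) {
            /* normally this has long since finished */
        }
    }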