
I am working on rebuilding some distortion units from Reaktor based on Infinite Linear Oversampling which is a technique to reduce aliasing. It involves integrals of the distortion equations with unit delays. Here's an example of a schematic: https://www.native-instruments.com/forum/attachments/ilo-tanh-png.54931/
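For context, the per-sample computation I'm trying to reproduce looks roughly like this, as far as I can tell (the naming is mine, not from the Reaktor schematic, and I may be off on the details):

    #include <cmath>

    // One sample of ILO'd tanh as I understand it: the average of tanh over the
    // interval between the previous and the current input, using the
    // antiderivative F(x) = log(cosh(x)). `x1` is the unit-delayed input, which
    // is exactly the Z-1 element I'm asking about below.
    double iloTanh(double x, double x1) {
        double diff = x - x1;
        if (std::abs(diff) < 1e-9)
            return std::tanh(0.5 * (x + x1));   // inputs nearly equal: avoid dividing by ~0
        return (std::log(std::cosh(x)) - std::log(std::cosh(x1))) / diff;
    }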

I'm wondering what the simplest way is to write a function that delays its output by one unit (Z-1).

I read Jordan Harris's posts here, but I'm not sure I follow his technique.

Here's what I came up with; I'm wondering if it might be the same idea:

    double output = nullptr;

    inline double getUnitDelay(float& input) {
        return output;
        input = output;
    }

So in principle it takes an input, but it doesn't return that input. It copies it to another variable called output, which I think needs to be initialized as nullptr so that there is something in it (i.e. nullptr) for the first sample request. I'm not sure how this can be folded into the function.

Since C++ is order sensitive (I think), this function returns the output from the prior sample each time it is run.

Then for example, it can be used in equations like this:

integral - getUnitDelay(integral) ... ;

Would that work? Is there a better way to do it?

Thanks as always


1 Answer


The main problem is that, once you get it working, the unit delay function's state is not tied to the particular data (here, the integral) it is used with.

    double delay1(double input) {
        static double previous = 0.0;  // holds the value from the previous call
        double outp = previous;        // output the sample from one call ago
        previous = input;              // remember the current input for next time
        return outp;
    }
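For example, if the same static-state function is used for two different signals inside the same sample loop, their histories get tangled together (hypothetical names, just to illustrate):

    double a = delay1(integral);   // meant to be the previous `integral`
    double b = delay1(drySignal);  // meant to be the previous `drySignal`
    // In reality `a` receives the old `drySignal` and `b` receives the
    // current `integral`, because both calls share the single static `previous`.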

You should add some kind of context to the delay, which you pass to the function so that each delayed variable gets its own state.

    double delay_better(double inp, double *context) {
        double out = *context;   // the sample stored on the previous call
        *context = inp;          // store the current sample in the caller's state
        return out;
    }
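Usage then looks something like this (variable names chosen just for illustration); in C++ you would more likely wrap the state and the function together in a small class, one instance per delayed signal:

    double integralState = 0.0;   // one piece of delay state per delayed signal

    // per sample:
    double delayedIntegral = delay_better(integral, &integralState);
    double diff = integral - delayedIntegral;   // integral minus its Z-1 version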

The generic approach for an N-unit delay would typically use a circular buffer, which is often even assisted at the instruction level on DSP chips.
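A minimal sketch of such an N-sample delay line in plain C++ (no DSP-specific addressing, and the class name is just an example):

    #include <vector>
    #include <cstddef>

    // Fixed-length delay line: overwrite the oldest sample with the newest one
    // and advance the write index modulo the length (the "circular" part).
    class DelayLine {
    public:
        explicit DelayLine(std::size_t length)       // length must be at least 1
            : buffer(length, 0.0), writeIndex(0) {}

        double process(double input) {
            double output = buffer[writeIndex];            // oldest sample, N samples old
            buffer[writeIndex] = input;                     // store the newest sample
            writeIndex = (writeIndex + 1) % buffer.size();  // wrap around
            return output;
        }

    private:
        std::vector<double> buffer;
        std::size_t writeIndex;
    };

With a length of 1 this reduces to exactly the unit delay above.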