
Important Update:

After a lot of research and experimentation, I've learned a few things that have led me to re-frame the question. Rather than trying to find an "exponential regression", I'm really trying to optimize a non-linear error function with bounded input and potentially unbounded output.

So long as the model is linear in its parameters, there is a way to directly compute the optimal parameters that minimize the squared error terms (take the derivative of the error function, locate the point where the derivative equals zero, then use this local minimum as your solution). Many times when people say "exponential regression" they're referring to an equation of the form a * e^(b*x). The reason, described below, is that by taking the natural log of both sides this maps perfectly onto a linear equation, and so it can be computed directly in a single step using the same method.

However in my case the equation a * b^x does not map onto a linear equation, and so there is no direct solution. Instead a solution must be determined iteratively.

There are a few non-linear curve fitting algorithms out there, notably Levenberg-Marquardt. I found a handful of implementations of this algorithm:

Unfortunately I tried all three of these implementations and the curve fitting was just atrocious. I have some sample data with 11,000 points for which I know the optimal parameters are a = 0.08 and b = 1.19; however, these algorithms often returned bizarre results like a = 117, b = 0.000001 or a = 0, b = 3243224.

Next I tried verifying my understanding of the problem using Excel. An error function can be defined as sum((y - y')^2), where y' is the estimated value given your parameters and an input x. The problem then reduces to minimizing this error function. I opened up my data (CSV), added a column for the "computed" values, added another column for the squared error terms, then finally used Solver to minimize the sum of the error terms. This worked beautifully! I got back a = 0.0796, b = 1.1897. Plotting two lines on the same graph (original data and estimated data) showed a really good fit.

I first tried doing the same in OpenOffice; however, the solver built into OpenOffice was just as bad as my Levenberg-Marquardt experiments and repeatedly gave worthless solutions. Even when I set initial values, it would "optimize" the problem and come up with something far worse than it started with.

Having proven my concept in Excel, I then tried optimization-js. I tried both its genetic optimization and Powell optimization (because I don't have a gradient function), and in both cases it produced awful results.

I did find a question regarding how Excel's Solver works which linked to an ugly PDF. I haven't taken the time to read the PDF yet, but it may provide hints for solving the problem manually. I also found a Python example that reportedly implements Generalized Reduced Gradient descent (the same algorithm Excel's Solver uses), so if I can make sense of it and rewrite it to accept a generic function as input then I may be able to use that.

New Question (given all that):

How, preferably in JavaScript (though other languages are acceptable so long as they can be run on AWS Lambda), can I optimize the parameters to the following function to minimize its output?

const data = /* ... read from CSV ... */;

// Sum-of-squared-errors objective: given candidate parameters [a, b],
// compare the model a * b^x against every observed {x, y} point.
const errorFunc = function ([a, b]) {
    let sumSqErr = 0;
    data.forEach(pt => {
        const est = a * Math.pow(b, pt.x); // model estimate for this x
        const err = est - pt.y;            // residual
        sumSqErr += err * err;
    });
    return sumSqErr * 1e-7; // scale factor to keep the objective value small (with ~11,000 points the raw sum gets large)
};
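
Any optimizer just calls this function repeatedly with candidate [a, b] pairs and tries to drive the return value down. As a quick sanity check (the second guess below is arbitrary and purely illustrative), the parameters my Excel experiment found should score far better than a naive starting point:

console.log(errorFunc([0.0796, 1.1897])); // near-optimal per the Excel experiment above
console.log(errorFunc([1, 1]));           // arbitrary guess; expect a much larger value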

Old Question:

Given several thousand data points, I would like to perform least-squares regression analysis to compute a and b for an equation of the form:

y = a * b^x

Back in my college days I knew how to do this using matrix math, but I'm afraid that was 10 years ago and I've fallen out of practice. I began by reading up on computing the least squares parameters using matrices from YouTube series such as https://www.youtube.com/watch?v=fb1CNQT-3Pg - however, after scratching my head a bit at how to convert from a linear regression to an exponential one, I decided to see if someone else had already solved the problem for me and went to Google to search for "JavaScript exponential regression".

This turned up two libraries:

However both of these libraries solve an equation of the form:

y = a * e^(b*x)

I'm wondering if someone can help me either convert the above form into my desired one (I can modify the y or x values of the input prior to performing the regression, for instance by computing the natural log of each) or walk me through the basics of performing a regression like this without a third-party library (except, possibly, something like math.js to facilitate matrix operations).

Update

Thinking on it, I realize why every library solves y = a * e^(b*x) -- taking the natural log of both sides reduces it to a basic linear regression:

ln(y) = ln(a * e^(b*x))
ln(y) = ln(a) + ln(e^(b*x))
ln(y) = ln(a) + b*x

Replacing ln(a) with a' and taking the natural log of all of our input y values before performing the regression analysis (we can call the new values y') reveals we're just doing linear regression at this point:

y' = a' + bx (solve for a' and b)

You can then compute e^(a') to recover a.
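
To make that concrete, here is a rough sketch of that log-transformed fit with no third-party library, using the standard closed-form formulas for simple linear regression (the helper name is made up for illustration, and the {x, y} point shape is an assumption, matching the error function shown earlier):

// Illustrative sketch: fit y = a * e^(b*x) by simple linear regression on (x, ln y).
// Assumes data is an array of {x, y} points with y > 0.
function fitExpOfE(data) {
    const n = data.length;
    let sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
    data.forEach(pt => {
        const ly = Math.log(pt.y); // y' = ln(y)
        sumX += pt.x;
        sumY += ly;
        sumXY += pt.x * ly;
        sumXX += pt.x * pt.x;
    });
    const b = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX); // slope
    const aPrime = (sumY - b * sumX) / n;                            // intercept a' = ln(a)
    return { a: Math.exp(aPrime), b };                               // recover a = e^(a')
}

One caveat: this minimizes the squared error of ln(y) rather than of y itself, so its answer can differ somewhat from directly minimizing sum((y - y')^2).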

Unfortunately the problem I'm trying to solve is more complex, since the base of the exponent isn't the fixed constant e but an unknown parameter. To "solve for" b it must be possible to get b on one side of the equation on its own. The best we can do is:

y = ab^x
(y/a) = b^x
(y/a)^(1/x) = b

This has us taking the "x-th root" of y. I'm not entirely sure how (or if) this would work in practice. Unlike the earlier example where I took the natural log of every y input (a common operation performed on every input), this has me performing a different operation on each input (for one y I'd take the square root, for another the cube root, for another the fourth root, etc.).

Is it possible that this regression can only be solved iteratively (e.g. via Newton's method)? If so, how would I go about solving it? Do I need to compute a complex derivative first?

One potentially disappointing finding I made was the following: solver.com/excel-solver-algorithms-and-methods-used -- the GRG2 algorithm employed by Excel (the only one that worked) may be copyrighted and very expensive to access. -- stevendesu

1 Answer


First and foremost:

  • My equation is differentiable; I just wasn't sure how to differentiate it. The error function is a sum, so its partial derivatives are just the derivatives of the terms inside the sum, which are computed using the chain rule. This means any nonlinear optimization algorithm that requires a gradient was available to me.
  • Many nonlinear optimizers have trouble when very small changes in inputs lead to massive changes in outputs, which can be the case with exponential functions. Tuning the damping or convergence parameters can help with this.

After some work I was able to get a version of gradient descent to compute the same answer as Excel, but it took 15 seconds to run (versus ~2 seconds for Excel's Solver) -- clearly my implementation was bad.
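
For reference, here is a minimal sketch of that kind of gradient descent, with the chain-rule gradient written out explicitly. The helper name, starting point, learning rate, and iteration count are all illustrative assumptions that would need tuning per dataset, which is exactly where the sensitivity mentioned above bites:

// Illustrative gradient descent for E(a, b) = sum((a*b^x - y)^2).
// Partial derivatives via the chain rule:
//   dE/da = sum( 2 * (a*b^x - y) * b^x )
//   dE/db = sum( 2 * (a*b^x - y) * a * x * b^(x-1) )
// Assumes data is an array of {x, y} points, as in the question.
function fitByGradientDescent(data, a = 1, b = 1.1, learningRate = 1e-9, iterations = 100000) {
    for (let i = 0; i < iterations; i++) {
        let gradA = 0;
        let gradB = 0;
        for (const pt of data) {
            const est = a * Math.pow(b, pt.x);
            const resid = est - pt.y;
            gradA += 2 * resid * Math.pow(b, pt.x);
            gradB += 2 * resid * a * pt.x * Math.pow(b, pt.x - 1);
        }
        a -= learningRate * gradA;
        b -= learningRate * gradB;
    }
    return { a, b };
}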

More Importantly

See: https://math.stackexchange.com/a/3850781/209313

There is no meaningful difference between e^(b*x) and b^x, because b^x == e^(log(b)*x). So we can use a linear regression model and then compute b by taking e to the power of whatever coefficient the model spits out.

Using regression-js:

const reg = require("regression");
const data = /* ... get data from CSV ... */; // regression-js expects an array of [x, y] pairs
// exponential() fits y = a * e^(b*x); since a * e^(b*x) == a * (e^b)^x, our b is e^(fitted exponent)
const results = reg.exponential(data);
console.log("a: " + results.equation[0] + ", b: " + Math.exp(results.equation[1]));