
I have a parametric estimation problem which can be solved by non-linear least squares optimization. I have an analytical model with two unknown parameters x[0] and x[1], and a vector of measured data samples from which I construct the cost function.

function cost_function(x)
    measured = data_read()       # read my measured samples
    model = analytical_model(x)  # compute the values from my analytical model method
    residual = abs.(measured - model)
    return residual
end

The LsqFit package in Julia, which contains the Levenberg-Marquardt (LM) implementation, only exposes the curve_fit method. It takes a model (analytical_model() here), xdata, p, and ydata, and passes them to the LM optimizer, i.e. model(xdata, p) - ydata becomes the residual. But in my case measured and model are both complex-valued, which is why I have to return the abs.().
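For reference, the standard curve_fit pattern looks like this (the linear toy model and data below are placeholders, not my actual problem):

import LsqFit

# toy real-valued example of the curve_fit pattern;
# internally the residual is model(xdata, p) - ydata
model(xdata, p) = p[1] .* xdata .+ p[2]
xdata = collect(1.0:10.0)
ydata = model(xdata, [2.0, 0.5])
fit = LsqFit.curve_fit(model, xdata, ydata, [1.0, 1.0])

This works for real-valued data, but not when the residual has to go through abs.().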

I have tried

import LsqFit
result = LsqFit.levenberg_marquardt(cost_function, rand(2)) 

but it requires the Jacobian of my cost_function(x) as another argument. I do not know the Jacobian analytically and I want the optimizer to compute it for me using a forward-difference approximation. Is there a way to do this in Julia?
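The closest I have gotten is hand-rolling the forward difference myself and passing it in, assuming levenberg_marquardt accepts (f, jacobian, x0) as in the call above (fd_jacobian is my own helper, not part of LsqFit):

import LsqFit

# my own forward-difference Jacobian helper (not part of LsqFit)
function fd_jacobian(f, x; h=1e-8)
    f0 = f(x)
    J = zeros(length(f0), length(x))
    for j in eachindex(x)
        xp = copy(x)
        xp[j] += h                      # perturb one parameter
        J[:, j] = (f(xp) .- f0) ./ h    # forward-difference column
    end
    return J
end

result = LsqFit.levenberg_marquardt(cost_function, x -> fd_jacobian(cost_function, x), rand(2))

but I would rather have the optimizer handle the differencing internally.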

Probably the simplest is to use github.com/JuliaNLSolvers/Optim.jl and return sum(residual) from your function (see the sketch after these comments). Note that Julia uses 1-based indexing, so you probably want x[1] and x[2]. – Bogumił Kamiński
@BogumiłKamiński, thanks for your response. I did try the Optim.jl package, but it does not implement Levenberg-Marquardt. The closest quadratic non-linear optimizer I found was NewtonTrustRegion(), which does not work efficiently for me. Regarding the indexing, I am a Python user slowly shifting to Julia, but I did use x[1] and x[2] in my code. :) – Vijay Karthik
Why do you strictly require the L-M optimizer? LsqFit.jl has the curve_fit function, to which you do not have to pass a Jacobian; you can choose a forward-difference approximation via the autodiff kwarg. – Bogumił Kamiński
As I said earlier, I looked through the curve_fit code and it does what I described in the question: curve_fit takes the model (analytical_model() here), xdata, p, and ydata, and passes model(xdata, p) - ydata to the LM optimizer as the residual. But I want abs.(model(xdata, p) - ydata) to be the residual. – Vijay Karthik
OK, now I get it; sorry for missing the comment. I have posted what I think should work (assuming you are working with Float64 values; if they are e.g. BigFloat, change the third argument accordingly). – Bogumił Kamiński
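For completeness, here is a minimal sketch of the Optim.jl route from the first comment, collapsing the residual vector to a scalar objective (NelderMead is just a derivative-free default here; sum(abs2, ...) would give a least-squares objective instead):

import Optim

# scalar objective: collapse the vector of absolute residuals to one number
objective(x) = sum(cost_function(x))

res = Optim.optimize(objective, [0.5, 0.5], Optim.NelderMead())
Optim.minimizer(res)   # the estimated parameters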

1 Answer


The following call should work:

LsqFit.lmfit(cost_function, [0.5, 0.5], Float64[])

(please let me know if it worked; [0.5, 0.5] is an example starting point and the empty Float64[] is the weight vector)

Here is an example:

julia> function cost_function(x)
           measured = [2:2:20;] .+ 0.5im
           model = [1:10;] .* x[1] .+ x[2] .* im
           residual = abs.(measured .- model)
           return residual
       end
cost_function (generic function with 1 method)

julia> LsqFit.lmfit(cost_function, [0.5, 0.5], Float64[])
LsqFit.LsqFitResult{Array{Float64,1},Array{Float64,1},Array{Float64,2},Array{Float64,1}}([2.0, 0.5], [5.18665e-8, 5.36734e-8, 5.65567e-8, 6.03625e-8, 6.49286e-8, 7.01067e-8, 7.57714e-8, 8.18218e-8, 8.81784e-8, 9.47796e-8], [0.000658313 -0.00846345; 0.00131663 -0.00846342; … ; 0.00592487 -0.00846286; 0.00658318 -0.00846272], true, Float64[])

julia> LsqFit.lmfit(cost_function, [0.5, 0.5], Float64[], autodiff=:forward)
LsqFit.LsqFitResult{Array{Float64,1},Array{Float64,1},Array{Float64,2},Array{Float64,1}}([2.0, 0.5], [4.44089e-16, 8.88178e-16, 1.77636e-15, 1.77636e-15, 1.77636e-15, 3.55271e-15, 3.55271e-15, 3.55271e-15, 3.55271e-15, 3.55271e-15], [-1.0 -0.0; -2.0 -0.0; … ; -9.0 -0.0; -10.0 -0.0], true, Float64[])

and we see that the correct solution [2.0, 0.5] is returned.
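If you only need the estimate, you can read the fields of the returned LsqFitResult directly (the field names below are my reading of LsqFit's LsqFitResult struct; the values correspond to the printed results above):

fit = LsqFit.lmfit(cost_function, [0.5, 0.5], Float64[], autodiff=:forward)
fit.param      # estimated parameters, here [2.0, 0.5]
fit.converged  # true if LM reported convergence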