
I would like to fit exponential curves to vertical temperature profile datasets. Sometimes the exponential shape is positive and other times negative, depending on air temperature conditions. In the end I would like to fit each curve and then extract the gradient and intercept for a series of individual temperature profiles (i.e. loop over the fitting and parameter extraction).

I have tried several suggestions from Stack Overflow and Google and cannot move past the "initial guess" step. Any help would be appreciated.

My latest and most promising attempt is below (taken from an example on Stack Overflow by whuber):

My data:

Temps1 <- c(284.1875, 285.6550, 286.2342, 286.9142, 287.7900,
            290.3492, 295.2517, 298.1608)
Temps2 <- c(275.6958, 275.0583, 274.7858, 274.4458, 273.9900, 273.1675,
            272.3225, 271.5875)
Depths <- c(-100, -70, -56, -42, -28, -14, 0, 7)

d <- data.frame(x = Temps1, y = Depths)
c.0 <- min(d[,1]) * 0.5
model.0 <- lm(log(Temps1) - c.0 ~ Depths, data=d)
start <- list(a=exp(coef(model.0)[1]), b=coef(model.0)[2], c=c.0)
model <- nls(d[,1]~ a * exp(b * Depths) + c, data = d, start = start)

I am stuck with this error:

Error in nlsModel(formula, mf, start, wts) : 
  singular gradient matrix at initial parameter estimates
In addition: Warning messages:
1: In min(x) : no non-missing arguments to min; returning Inf
2: In max(x) : no non-missing arguments to max; returning -Inf

This initial guess seems to work: list(a=1, b=1, c=1) (but not a terribly constructive comment). Also, you need to pass the variable names rather than use matrix indexing, so: nls(x ~ a * exp(b * y) + c, data = d, start = list(a=1, b=1, c=1)) – user20650
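
For reference, the comment's fix can be run as a plain script along the following lines (a minimal sketch; the simple starting values are the ones the commenter reports working):

# refer to the data-frame columns by name in the formula
d <- data.frame(x = Temps1, y = Depths)
fit <- nls(x ~ a * exp(b * y) + c, data = d,
           start = list(a = 1, b = 1, c = 1))
coef(fit)  # estimates of a, b and c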

1 Answer


I suggest using the "plinear" algorithm. Note that in the output, .lin1 is a and .lin2 is c; you don't need starting values for the linear parameters:

> nls(x ~ cbind(exp(b * y), 1), d, alg = "plinear", start = list(b = coef(model.0)[2]))
Nonlinear regression model
  model: x ~ cbind(exp(b * y), 1)
   data: d
        b     .lin1     .lin2 
  0.03831  10.42913 284.57042 
 residual sum-of-squares: 1.061

Number of iterations to convergence: 7 
Achieved convergence tolerance: 3.572e-06
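
Since the question mentions looping over several profiles, here is a minimal sketch (my own addition, not part of the original answer) of wrapping the "plinear" fit in a function and applying it to both Temps1 and Temps2. The starting slope is taken from a rough log-linear fit, so profiles that decrease with depth as well as ones that increase should both get a reasonable initial b:

# fit x ~ a * exp(b * y) + c to one profile and return the coefficients
fit_profile <- function(temps, depths) {
  d <- data.frame(x = temps, y = depths)
  c.0 <- min(d$x) * 0.5                           # rough offset below the data
  b.0 <- coef(lm(log(x - c.0) ~ y, data = d))[2]  # starting value for b
  fit <- nls(x ~ cbind(exp(b * y), 1), data = d,
             algorithm = "plinear", start = list(b = b.0))
  co <- coef(fit)
  c(a = unname(co[".lin1"]), b = unname(co["b"]), c = unname(co[".lin2"]))
}

# apply to each profile and collect the parameters in a matrix
profiles <- list(Temps1 = Temps1, Temps2 = Temps2)
params <- t(sapply(profiles, fit_profile, depths = Depths))
params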