1 vote

I am using lmer from the lme4 package to calculate confidence intervals for variance components.

When I fit the model, I get these warning messages:

fit <- lmer(Y~X+Z+X:Z+(X|group),data=sim_data)
Warning messages:
1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  unable to evaluate scaled gradient
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  Model failed to converge: degenerate  Hessian with 1 negative eigenvalues

I searched a lot to understand why this occurs and eventually learned that there is a difference between an error and a warning in the R world.

I want to compute confidence intervals for the model parameters, but running the following code produces an error:

 confint.merMod(fit,oldNames=FALSE)
 Computing profile confidence intervals ...
 Error in if (all(x[iu <- upper.tri(x)] == 0)) t(x[!iu]) else t(x)[!iu] : 
 missing value where TRUE/FALSE needed

Is there another way to obtain CIs for the random effects with lmer?

EDIT:

library(lme4)      # for lmer()
library(mvtnorm)   # for rmvnorm()

simfun <- function(J,n_j,g00,g10,g01,g11,sig2_0,sig01,sig2_1){
    N <- sum(rep(n_j,J))  # total number of observations (J groups of n_j)
    x <- rnorm(N)         # level-1 covariate
    z <- rnorm(J)         # level-2 (group) covariate

    # group-level random effects for intercept and slope
    mu  <- c(0,0)
    sig <- matrix(c(sig2_0,sig01,sig01,sig2_1),ncol=2)
    u   <- rmvnorm(J,mean=mu,sigma=sig)

    b_0j <- g00 + g01*z + u[,1]   # group-specific intercepts
    b_1j <- g10 + g11*z + u[,2]   # group-specific slopes

    y <- rep(b_0j,each=n_j) + rep(b_1j,each=n_j)*x + rnorm(N,0,0.5)
    data.frame(Y=y,X=x,Z=rep(z,each=n_j),group=rep(1:J,each=n_j))
}

noncoverage <- function(J,n_j,g00,g10,g01,g11,sig2_0,sig01,sig2_1){
    dat <- simfun(J,n_j,g00,g10,g01,g11,sig2_0,sig01,sig2_1)
    # fit the random-intercept/random-slope model to one simulated data set
    fit <- lmer(Y~X+Z+X:Z+(X|group),data=dat)
    fit
}

comb1  <- replicate(1000, noncoverage(10, 5, 1, .3, .3, .3, 1/18, 0, 1/18))   # 10 groups of 5
comb26 <- replicate(1000, noncoverage(100, 50, 1, .3, .3, .3, 1/8, 0, 1/8))   # 100 groups of 50
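
As a side note (this check is my own sketch, not part of the original analysis), you can count how often the convergence warnings occur across replicates by catching them explicitly:

# sketch: proportion of replicates of the small design that produce a warning
warned <- replicate(100, {
  got_warning <- FALSE
  withCallingHandlers(
    noncoverage(10, 5, 1, .3, .3, .3, 1/18, 0, 1/18),
    warning = function(w) {
      got_warning <<- TRUE
      invokeRestart("muffleWarning")
    }
  )
  got_warning
})
mean(warned)   # e.g. 0.4 would mean 40% of the fits warned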
If you get convergence warnings you can expect trouble in some subsequent analyses. confint is one of the functions that usually don't work in such a case. You should be wary and shouldn't trust this model. Investigate why it didn't converge. – Roland
We probably need to figure out the root cause of the warnings (although good for you for understanding the error/warning distinction). Have you read the ?convergence manual page, especially the stuff below "If you do see convergence warnings"? Can you post more info on your problem (i.e. a reproducible example)? – Ben Bolker
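
For reference, the checks that the ?convergence help page suggests look roughly like the sketch below (reusing the fit object from the question; allFit() requires a reasonably recent version of lme4):

library(lme4)

# restart the optimization from the previous parameter estimates
pars <- getME(fit, "theta")
fit_restart <- update(fit, start = pars)

# refit with all available optimizers and compare the results
all_fits <- allFit(fit)
ss <- summary(all_fits)
ss$llik    # log-likelihoods should agree if the warning is a false positive
ss$fixef   # ... as should the fixed-effect estimates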

4 Answers

3 votes

It depends on exactly what you want from the confidence intervals, but the sim function in the arm package provides a great way to obtain repeated samples from the posterior of an lmer or glmer object, which gives a sense of the variability in both the fixed and random coefficients.

In the merTools package, we've written a wrapper that simplifies the process of extracting these values and interacting with them:

library(merTools)
randomSims <- REsim(fit, n.sims = 500)
# and to plot it
plotREsim(randomSims)

There are a number of other tools for exploring these in merTools. If you want the actual resulting simulations though, you're better off using arm::sim.
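
For example, a minimal arm::sim sketch (reusing the fit object from the question; the @fixef/@ranef slots are where current versions of arm store the draws) could look like this:

library(arm)   # for sim()

sims <- sim(fit, n.sims = 500)

# fixed effects: one row per draw, one column per coefficient
fe <- sims@fixef
t(apply(fe, 2, quantile, probs = c(0.025, 0.975)))   # simulation-based 95% intervals

# random effects: one array per grouping factor, here "group"
re <- sims@ranef$group   # dimensions: n.sims x number of groups x number of terms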

1 vote

An lmer model reports its results in a matrix. You can access the estimates and standard errors of the model from that matrix to calculate confidence intervals.

The first row is the estimate of the model intercept, so the second row is the one that shows the estimate for your manipulated variable. The first column is the estimate of the effect, and the second column is its standard error.

So, changing MyModel to the name of your model and rounding the numbers to two decimals, round(coef(summary(MyModel))[2,1],2) - round(coef(summary(MyModel))[2,2],2)*2 gives you the lower end of the confidence interval, while simply changing the subtraction to an addition in the previous formula gives you the upper end of the estimate: round(coef(summary(MyModel))[2,1],2) + round(coef(summary(MyModel))[2,2],2)*2.
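
Written out as a small script (MyModel is still a placeholder for your fitted lmer object; this is the usual Wald-style estimate ± 2·SE interval), it is a bit easier to read:

coefs <- coef(summary(MyModel))   # fixed-effect table: estimates, SEs, t-values
est <- coefs[2, 1]                # estimate for the second fixed effect
se  <- coefs[2, 2]                # its standard error
round(c(lower = est - 2 * se, upper = est + 2 * se), 2)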

1 vote

You could use the parameters package, which offers methods to compute CIs for fixed effects (including the Kenward-Roger approximation for small sample sizes), standard errors for fixed and random effects, and complete model summaries. See the examples below.

library(parameters)
library(lme4)
#> Loading required package: Matrix
model <- lmer(mpg ~ wt + (1 | gear), data = mtcars)

ci(model)
#>     Parameter CI   CI_low   CI_high
#> 1 (Intercept) 95 31.89749 40.482616
#> 2          wt 95 -6.29877 -3.791242

ci_kenward(model)
#>     Parameter CI    CI_low   CI_high
#> 1 (Intercept) 95 31.208589 41.171513
#> 2          wt 95 -6.573142 -3.516869

standard_error(model)
#>     Parameter        SE
#> 1 (Intercept) 2.1901246
#> 2          wt 0.6396873

standard_error(model, effects = "random")
#> $gear
#>   (Intercept)
#> 3   0.6457169
#> 4   0.6994964
#> 5   0.9067223

model_parameters(model, df_method = "satterthwaite")
#> Parameter   | Coefficient |   SE |         95% CI |     t |    df |      p
#> --------------------------------------------------------------------------
#> (Intercept) |       36.19 | 2.19 | [31.90, 40.48] | 16.52 | 13.85 | < .001
#> wt          |       -5.05 | 0.64 | [-6.30, -3.79] | -7.89 | 21.92 | < .001

Created on 2020-02-07 by the reprex package (v0.3.0)

0 votes

The problem here may be your random-effects term (X|group). Note that in lme4 this is equivalent to (1 + X|group): the random intercept is included by default, so you are already fitting correlated random intercepts and random slopes for each group. With only 10 groups of 5 observations that structure can be hard to estimate, which is a common source of the convergence warnings you report. Try simplifying to the random-intercept model (1|group), or write the full specification explicitly as (1 + X|group) to make the intent clear, and check whether the warnings go away.
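
For instance (a sketch only, reusing sim_data from the question):

# explicit form of the original model (identical to (X | group))
fit_full <- lmer(Y ~ X + Z + X:Z + (1 + X | group), data = sim_data)

# simplified model: random intercept only
fit_ri <- lmer(Y ~ X + Z + X:Z + (1 | group), data = sim_data)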