0 votes

I ran an experiment in which participants had to answer moral dilemmas that were either personal or impersonal. I now want to see whether there is an interaction between the type of dilemma and the answer participants gave (yes or no) that influences their reaction time. For this, I fitted a linear mixed model using the lmer() function from the lme4 package. My data look like this:

  subject condition gender.b age    logRT   answer   dilemma   pers_force
1     105     a_MJ1        1  27 5.572154      1         1          1
2     107     b_MJ3        1  35 5.023881      1         1          1
3     111     a_MJ1        1  21 5.710427      1         1          1
4     113     c_COA        0  31 4.990433      1         1          1
5     115     b_MJ3        1  23 5.926926      1         1          1
6     119     b_MJ3        1  28 5.278115      1         1          1

My function looks like this:

lmm <- lmer(logRT ~ pers_force * answer + (1|subject) + (1|dilemma), 
    data = dfb.3, REML = FALSE, control = lmerControl(optimizer="Nelder_Mead"))

with subjects and dilemmas as random factors. This is the output:

Linear mixed model fit by maximum likelihood  ['lmerMod']
Formula: logRT ~ pers_force * answer + (1 | subject) + (1 | dilemma)
   Data: dfb.3
Control: lmerControl(optimizer = "Nelder_Mead")

     AIC      BIC   logLik deviance df.resid 
-13637.3 -13606.7   6825.6 -13651.3      578 

Scaled residuals: 
       Min         1Q     Median         3Q        Max 
-3.921e-07 -2.091e-07  2.614e-08  2.352e-07  6.273e-07 

Random effects:
 Groups    Name        Variance  Std.Dev. 
 subject   (Intercept) 3.804e-02 1.950e-01
 dilemma   (Intercept) 0.000e+00 0.000e+00
 Residual              1.155e-15 3.398e-08
Number of obs: 585, groups:  subject, 148; dilemma, 4

Fixed effects:
                     Estimate Std. Error t value
(Intercept)         5.469e+00  1.440e-02   379.9
pers_force1        -1.124e-14  5.117e-09     0.0
answer             -1.095e-15  4.678e-09     0.0
pers_force1:answer -3.931e-15  6.540e-09     0.0

Correlation of Fixed Effects:
            (Intr) prs_f1 answer
pers_force1  0.000              
answer       0.000  0.447       
prs_frc1:aw  0.000 -0.833 -0.595
optimizer (Nelder_Mead) convergence code: 0 (OK)
boundary (singular) fit: see ?isSingular

I then did a likelihood ratio test against a reduced model to obtain p-values:

lmm_null <- lmer(logRT ~ pers_force + answer + (1|subject) + (1|dilemma),
    data = dfb.3, REML = FALSE, 
    control = lmerControl(optimizer="Nelder_Mead"))
anova(lmm,lmm_null)

For both models I get the warning "boundary (singular) fit: see ?isSingular", but if I drop one random effect to simplify the structure, I instead get a warning that the models failed to converge (which is a bit strange), so I ignored it. The LRT output then looks like this:

Data: dfb.3
Models:
lmm_null: logRT ~ pers_force + answer + (1 | subject) + (1 | dilemma)
lmm: logRT ~ pers_force * answer + (1 | subject) + (1 | dilemma)
         npar    AIC    BIC logLik deviance Chisq Df Pr(>Chisq)
lmm_null    6 -13639 -13613 6825.6   -13651                    
lmm         7 -13637 -13607 6825.6   -13651     0  1          1

As you can see, the chi-square value is 0 and the p-value is exactly 1, which seems very strange. I guess something must have gone wrong here, but I can't figure out what.

1
For reasons I don't entirely understand, all of the variation in your response is explained by the intercept and the subject-specific intercept term. This would make sense if you had a single observation per subject, but I see that you have (nearly) 4. Are there 4 distinct logRT values for each subject? Can you create a minimal reproducible example? – Ben Bolker
Why is "dilemma" a random effect? If this is the "question", would you expect the dilemmas to have no effect "on average" but vary somewhat among each other? As for your null hypothesis, your random effects seem to have little to no effect (although a test would be needed to be sure). Have you tried fitting a simple linear regression and seeing whether the parameters are still insignificant (for example, across groups using lme4::lmList)? Since you have a simple grouping (subject), visualizing parameters across independent groups could give some insight into your data. – Oliver
@BenBolker there are 4 dilemmas per person, and logRT is the average logarithmized reaction time across all those 4 dilemmas. Unfortunately, I am not sure how to create a minimal reproducible example... – Carolin Schips
@Oliver I included dilemma as a random effect because the 4 dilemmas varied a bit in their specifics (like the exact scenario, the number of people involved, etc.). And thanks, I will try to fit a linear regression next! – Carolin Schips
Then it makes sense to keep it as a random effect. I don't think a "minimal reproducible example" would be anything other than providing the full dataset in this case (which I'll assume is not feasible). With this structure it seems "dilemma" is nested in "subject", but right now "dilemma" is not showing any effect as a random effect (its variance is numerically 0). All in all, checking whether the coefficients vary across groups in standard regression models (or whether a simple regression is sufficient) would be my next move. – Oliver

1 Answer

0 votes

You say

logRT is the average logarithmized reaction time across all those 4 dilemmas.

If I'm interpreting this correctly — i.e., each subject has the same response for all of the times they are observed — then this is the proximal cause of your problem. (I know I've seen this exact problem before, but I don't know where — here? [email protected]?)
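A quick way to check this diagnosis is to count the distinct logRT values within each subject. Here is a sketch on made-up toy data (the data frame `toy` and its values are purely illustrative; on the real data you would group `dfb.3$logRT` by `dfb.3$subject` the same way):

```r
# toy stand-in for averaged data: one repeated logRT value per subject
toy <- data.frame(subject = rep(c(105, 107, 111), each = 4),
                  logRT   = rep(c(5.57, 5.02, 5.71), each = 4))

# number of distinct response values within each subject
n_distinct <- tapply(toy$logRT, toy$subject, function(x) length(unique(x)))
n_distinct
# if every entry is 1, the response carries no within-subject variation,
# and there is nothing left for the residual variance to be estimated from
```

If every count is 1, the averaging is the culprit.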

simulate data

library(lme4)
set.seed(101)
dd1 <- expand.grid(subject=factor(100:150), contrasts=factor(1:4))
dd1$answer <- rbinom(nrow(dd1),size=1,prob=0.5)
dd1$logRT <- simulate(~answer + contrasts + (1|subject),
                      family=gaussian,
                      newparams=list(beta=c(0,1,1,-1,2),theta=1,sigma=1),
                      newdata=dd1)[[1]]

regular fit

This is fine and gives answers close to the true parameters:

m1 <- lmer(logRT~answer + contrasts + (1|subject), data=dd1)
## Linear mixed model fit by REML ['lmerMod']
## Random effects:
##  Groups   Name        Std.Dev.
##  subject  (Intercept) 1.0502  
##  Residual             0.9839  
## Number of obs: 204, groups:  subject, 51
## Fixed Effects:
## (Intercept)       answer   contrasts2   contrasts3   contrasts4  
##    -0.04452      0.85333      1.16785     -1.07847      1.99243  

now average the responses by subject

We get a raft of warning messages, and the same pathologies you are seeing (residual variance and all parameter estimates other than the intercept are effectively zero). This is because lmer is trying to estimate residual variance from the within-subject variation, and we have gotten rid of it!

I don't know why you are doing the averaging. If this is unavoidable, and your design is the randomized-block type shown here (each subject sees all four dilemmas/contrasts), then you can't estimate the dilemma effects.

dd2 <- transform(dd1, logRT=ave(logRT,subject))
m2 <- update(m1, data=dd2)
## Random effects:
##  Groups   Name        Std.Dev. 
##  subject  (Intercept) 6.077e-01
##  Residual             1.624e-05
## Number of obs: 204, groups:  subject, 51
## Fixed Effects:
## (Intercept)       answer   contrasts2   contrasts3   contrasts4  
##   9.235e-01    1.031e-10   -1.213e-11   -1.672e-15   -1.011e-11  
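The same degenerate fit can also be produced and checked programmatically in a self-contained sketch (the names `dd` and `m` are made up here; the convergence/singularity messages are expected, and lme4's isSingular() confirms the boundary fit):

```r
library(lme4)
set.seed(101)

# four observations per subject, but the response is constant within subject,
# mimicking a per-subject average pasted back onto every row
dd <- expand.grid(subject = factor(1:40), item = factor(1:4))
dd$y <- rnorm(40)[as.integer(dd$subject)]

m <- lmer(y ~ item + (1 | subject), data = dd)

isSingular(m)  # TRUE: the fit sits on the boundary of the parameter space
VarCorr(m)     # the residual standard deviation is numerically zero
```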

Treating the dilemmas as a random effect won't do what you want (i.e., allow for individual-to-individual variability in how they were presented); that among-subject variability in the dilemmas is going to get lumped into the residual variability, which is where it belongs. Since you only have four dilemmas, I would recommend treating dilemma as a fixed effect.
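A sketch of what that recommendation looks like, reusing the simulated design from above (regenerated here so the snippet stands alone) and testing the dilemma/contrasts term with a likelihood-ratio test analogous to the questioner's anova() comparison; note REML = FALSE, since LRTs on fixed effects require maximum-likelihood fits:

```r
library(lme4)
set.seed(101)
dd1 <- expand.grid(subject = factor(100:150), contrasts = factor(1:4))
dd1$answer <- rbinom(nrow(dd1), size = 1, prob = 0.5)
dd1$logRT <- simulate(~ answer + contrasts + (1 | subject),
                      family = gaussian,
                      newparams = list(beta = c(0, 1, 1, -1, 2), theta = 1, sigma = 1),
                      newdata = dd1)[[1]]

# contrasts (the dilemma analogue) as a *fixed* effect; subject stays random
m_full <- lmer(logRT ~ answer + contrasts + (1 | subject), data = dd1, REML = FALSE)
m_red  <- lmer(logRT ~ answer             + (1 | subject), data = dd1, REML = FALSE)
anova(m_full, m_red)  # LRT for the contrasts (dilemma) effect
```

On the real (unaveraged, per-trial) data, the analogous comparison would add `dilemma` as a fixed effect to both of the questioner's models.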