
I have two regression models without random effects: one is an ordinary OLS model fit with lm, the other multiplies coefficients together and is fit with nls. I wish to add individual-level random effects to both. I've managed to do this for the OLS model using the lme4 package, but haven't found a way to do it for the multiplicative model.

The following code produces a dataset with a structure similar to the one I am working with:

# 1000 individuals (id) with 10 observations each; jit is an individual-level offset
df <- data.frame(id = rep(1:1000, each = 10), jit = rep(rnorm(1000, 0, 0.2), each = 10),
                 a = sample(1:5, 10000, T), b = sample(1:5, 10000, T), c = sample(1:5, 10000, T))
# dummy-code levels 2:5 of a, b and c (the appended rows only ensure all levels are present)
df <- cbind(df, model.matrix(~ as.factor(a) + as.factor(b) + as.factor(c),
                             data.frame(rbind(as.matrix(df),
                                              t(matrix(rep(1:5, each = 5), nrow = 5)))))[1:nrow(df), 2:13])
colnames(df)[6:17] <- (dim_dummies <- as.vector(outer(2:5, letters[1:3],
                                                      function(x, y) paste(y, x, sep = ""))))
true_vals <- list(vL2 = 0.4, vL3 = 0.5, vL4 = 0.8, vA = 0.7, vB = 1.1, vC = 0.9)
attach(df)
attach(true_vals)
df$val <- 
  (a2 * vA + b2*vB + c2*vC) * vL2 + 
  (a3 * vA + b3*vB + c3*vC) * vL3 + 
  (a4 * vA + b4*vB + c4*vC) * vL4 + 
  (a5 * vA + b5*vB + c5*vC) + runif(1, -.2, .2) + jit
detach(true_vals)
detach(df)

df[1:15, ]
   id      jit a b c a2 a3 a4 a5 b2 b3 b4 b5 c2 c3 c4 c5     val
1   1 -0.14295 4 4 1  0  0  1  0  0  0  1  0  0  0  0  0  1.1698
2   1 -0.14295 5 1 4  0  0  0  1  0  0  0  0  0  0  1  0  1.1498
3   1 -0.14295 5 4 4  0  0  0  1  0  0  1  0  0  0  1  0  2.0298
4   1 -0.14295 5 1 5  0  0  0  1  0  0  0  0  0  0  0  1  1.3298
5   1 -0.14295 5 4 2  0  0  0  1  0  0  1  0  1  0  0  0  1.6698
6   1 -0.14295 1 5 1  0  0  0  0  0  0  0  1  0  0  0  0  0.8298
7   1 -0.14295 3 2 5  0  1  0  0  1  0  0  0  0  0  0  1  1.4198
8   1 -0.14295 3 2 1  0  1  0  0  1  0  0  0  0  0  0  0  0.5198
9   1 -0.14295 3 2 4  0  1  0  0  1  0  0  0  0  0  1  0  1.2398
10  1 -0.14295 5 3 3  0  0  0  1  0  1  0  0  0  1  0  0  1.4298
11  2 -0.01851 4 5 3  0  0  1  0  0  0  0  1  0  1  0  0  1.9643
12  2 -0.01851 2 1 3  1  0  0  0  0  0  0  0  0  1  0  0  0.5843
13  2 -0.01851 2 1 3  1  0  0  0  0  0  0  0  0  1  0  0  0.5843
14  2 -0.01851 1 1 1  0  0  0  0  0  0  0  0  0  0  0  0 -0.1457
15  2 -0.01851 2 3 1  1  0  0  0  0  1  0  0  0  0  0  0  0.6843

...

a, b, and c represent scores on three 1:5 dimension scales. a2 through c5 are dummy variables representing levels 2:5 on the same scales. There are 10 observations per individual (id). val is a proxy for the score I wish to predict using the regression models. (The values in the actual data may not correspond to the structure here, however.)
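
To double-check that structure in the simulated data, a quick sanity check (these lines only apply to the example df above) could be:

# exactly 10 observations per individual?
all(table(df$id) == 10)
# dummies consistent with the raw scores, e.g. a2 flags a == 2
all(df$a2 == as.integer(df$a == 2))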

I have two regression models without random effects. One is a regular OLS using the 12 dummy variables as predictors of val:

additive.formula <- as.formula("val ~ 
  a2 + a3 + a4 + a5 + 
  b2 + b3 + b4 + b5 + 
  c2 + c3 + c4 + c5")
fit.additive <- lm(additive.formula, data = df)

The second assumes that the relative distances between the levels are shared across the three dimensions (a, b, c), but that the dimensions differ in scale. That leaves 6 coefficients (cA, cB, cC, cL2, cL3, cL4) plus the intercept.

multiplicative.formula <- as.formula(" val ~ intercept +
  (a2 * cA + b2*cB + c2*cC) * cL2 + 
  (a3 * cA + b3*cB + c3*cC) * cL3 + 
  (a4 * cA + b4*cB + c4*cC) * cL4 + 
  (a5 * cA + b5*cB + c5*cC)")
multiplicative.start <- list(intercept = 0, cA = 1, cB = 1, cC = 1, cL2 = 1, cL3 = 1, cL4 = 1)
fit.multiplicative <- nls(multiplicative.formula, start=multiplicative.start, data=df, control = list(maxiter = 5000))
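
As a rough check on the simulated example (not the real data), the fitted coefficients should land near the generating values in true_vals, i.e. cA, cB, cC near 0.7, 1.1, 0.9 and cL2, cL3, cL4 near 0.4, 0.5, 0.8:

round(coef(fit.multiplicative), 2)  # estimated coefficients
unlist(true_vals)                   # values used to simulate val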

Since there are 10 observations per individual, we cannot expect them to be fully independent. Therefore, I wish to add a random effect at the individual level, as defined by the variable id. I've found a way to do that with the lme4 package:

require(lme4)
additive.formula.re <- as.formula("val ~ (1 | id) +
  a2 + a3 + a4 + a5 + 
  b2 + b3 + b4 + b5 + 
  c2 + c3 + c4 + c5")
fit.additive.re <- lmer(additive.formula.re, data=df)
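
The id-level variance from that fit can then be inspected with the usual lme4 accessors, for example:

VarCorr(fit.additive.re)  # random-intercept and residual variance
fixef(fit.additive.re)    # fixed-effect estimates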

The question is whether it is possible to add a random effect on the id variable in a model similar to the multiplicative one, perhaps with the lme4 or nlme packages. The formula should look something like

multiplicative.formula.re <- as.formula(" val ~ (1 | id) + intercept +
  (a2 * cA + b2*cB + c2*cC) * cL2 + 
  (a3 * cA + b3*cB + c3*cC) * cL3 + 
  (a4 * cA + b4*cB + c4*cC) * cL4 + 
  (a5 * cA + b5*cB + c5*cC)")

Any suggestions?


1 Answer


Try nlme. This should be what you need (if I understood correctly):

library(nlme)
fit.multiplicative.nlme <- nlme( model = val ~ intercept +
                                   (a2 * cA + b2*cB + c2*cC) * cL2 + 
                                   (a3 * cA + b3*cB + c3*cC) * cL3 + 
                                   (a4 * cA + b4*cB + c4*cC) * cL4 + 
                                   (a5 * cA + b5*cB + c5*cC),
                                 fixed = intercept + cA + cB + cC + cL2 + cL3 + cL4 ~ 1,
                                 random = intercept ~ 1|id,
                                 start = unlist(multiplicative.start), data=df)

However, this didn't converge when I tried it with the non-reproducible data you provide (you should set a random seed). You could try different settings in nlmeControl.
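
A minimal sketch of passing such settings (the particular values are only a starting point, not a recommendation):

fit.multiplicative.nlme <- nlme( model = val ~ intercept +
                                   (a2 * cA + b2*cB + c2*cC) * cL2 + 
                                   (a3 * cA + b3*cB + c3*cC) * cL3 + 
                                   (a4 * cA + b4*cB + c4*cC) * cL4 + 
                                   (a5 * cA + b5*cB + c5*cC),
                                 fixed = intercept + cA + cB + cC + cL2 + cL3 + cL4 ~ 1,
                                 random = intercept ~ 1 | id,
                                 start = unlist(multiplicative.start), data = df,
                                 # looser iteration limits and PNLS tolerance
                                 control = nlmeControl(maxIter = 200, msMaxIter = 200,
                                                       pnlsTol = 0.01))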


The following was incorrect:

I don't see a reason for non-linear least squares here. Let's reverse the dummy encoding:

# add a row identifier and explicit level-1 dummies
df$id1 <- seq_len(nrow(df))
df$a1 <- as.integer(rowSums(df[, paste0("a", 2:5)]) == 0)
df$b1 <- as.integer(rowSums(df[, paste0("b", 2:5)]) == 0)
df$c1 <- as.integer(rowSums(df[, paste0("c", 2:5)]) == 0)
# melt the 15 dummies, keep the active one per dimension,
# then cast back to one factor column per dimension (faca, facb, facc)
library(reshape2)
DFm <- melt(df, id.vars = c("id", "jit", "a", "b", "c", "val", "id1"))
DFm <- DFm[DFm$value == 1, ]
DFm$g <- paste0("fac", substr(DFm$variable, 1, 1))
DF <- dcast(DFm, ... ~ g, value.var = "variable")


fit1 <- lm(val ~ faca + facb + facc, data = DF)

# compare results: e.g. the product cA*cL2 from the nls fit vs. the faca level-2 coefficient from fit1
coef(fit.multiplicative)
prod(coef(fit.multiplicative)[c("cA", "cL2")])
coef(fit1)["facaa2"]
prod(coef(fit.multiplicative)[c("cA", "cL3")])
coef(fit1)["facaa3"]

As you can see, this is basically the same model (the differences are due to the numerical optimization within nls), and it's easy to add a random intercept to it; see the sketch below.
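
For reference, a random intercept on id could be added to that factor-coded model with lme4, something like (fit1.re is just an illustrative name):

library(lme4)
fit1.re <- lmer(val ~ faca + facb + facc + (1 | id), data = DF)
VarCorr(fit1.re)  # id-level vs. residual variance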