
Background and Task: Consider a random sample of size n with a binary outcome Y_i, and assume Y_i ~ Bern(pi_i) with the probit link function pi_i = Phi(X_i^T beta).

Create an X matrix with 200 observations on three covariates, each with standard normal distribution. Generate a y vector assuming the probit model is correct for the data, i.e. choose some values of beta and use these with the X values and probit link function to generate a set of outcomes y.

Write a function in R to fit probit models, taking a vector of responses y and a matrix of covariates with an intercept column, X. Run the model on these data and compare the coefficient estimates to the true values.

Code:

X=rnorm(200*3) # generate 200x3=600 random standard normal values
dim(X)=c(200, 3) # set the dimensions of X to be 200x3
X=cbind(1, X) # add a column of 1's for the intercept
X # print X
beta=c(1,4,2,3) # choose some values of beta
pi_i=pnorm(X%*%beta)
y=numeric(200) # initialize y before filling it in the loop
for (i in 1:200) { # generate y vector
  y[i]=rbinom(1, 1, pi_i[i])
}

loglik=function(par, X, y) {
  pi_est=pnorm(X%*%par) # probit link function
  ll=sum(y*log(pi_est)+(1-y)*log(1-pi_est)) # log likelihood for bernoulli sample
  return(ll)
}
opt.out=optim(par=c(0,0,0,0), fn=loglik, X=X, y=y, method="BFGS", control=list(fnscale=-1), hessian=TRUE) # error in this line

Issue: I'm getting the error

Error in optim(par = c(0, 0, 0, 0), fn = loglik, X = X, y = y, method = "BFGS", : non-finite finite-difference value [3]

Does anyone know why this is?


1 Answer


Running the loglik function from the question on the data below can return NaN. The problem is that some values of pi_est are numerically equal to 1, so the term log(1 - pi_est) evaluates to log(0) = -Inf, making the log-likelihood non-finite.

par <- c(1, 4, 2, 3)
pi_est <- pnorm(X %*% par) 
ll <- sum(y* log(pi_est) + (1 - y)* log(1 - pi_est)) 
ll
# [1] NaN

In particular, it is the values of pi_est that are rounded to exactly 1 in double precision -- a numerical accuracy issue:

idx <- which(is.infinite(log(1 - pi_est)))
print(pi_est[idx], digits=21)
# [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1

You can reduce the chance of this happening by letting pnorm compute the log of the probability directly with log.p=TRUE; combined with lower.tail=FALSE this gives an accurate value of log(1 - pi_est) even when 1 - pi_est underflows to 0. Also note that for the normal CDF, Phi(-x) = 1 - Phi(x). Substituting log(pi_est) with pnorm(X %*% par, log.p=TRUE) and log(1 - pi_est) with pnorm(X %*% par, log.p=TRUE, lower.tail=FALSE) (which is equal to pnorm(-X %*% par, log.p=TRUE)) leads to a more numerically stable calculation.

loglik <- function(par, X, y) {
  lp = X %*% par
  log_pi = pnorm(lp, log.p=TRUE)                      # log(pi_est)
  log_1mpi = pnorm(lp, log.p=TRUE, lower.tail=FALSE)  # log(1 - pi_est)
  # return the negative log-likelihood so that optim's default
  # minimisation can be used (no need for control=list(fnscale=-1))
  ll = -sum(y*log_pi + (1-y)*log_1mpi)
  return(ll)
}
opt.out <- optim(par=c(1,1,1,1), 
                 fn=loglik, X=X, y=y, 
                 method="BFGS",
                 hessian=TRUE) 
opt.out$par
# [1] 1.207355 4.585248 2.064004 3.430316
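
The task also asks for a comparison with the true coefficients, and hessian=TRUE was requested but not used above. Since loglik returns the negative log-likelihood, the Hessian from optim is the observed information, so its inverse approximates the covariance matrix of the estimates. A small follow-up along these lines (a sketch, not run on your exact data):

cbind(true = c(1, 4, 2, 3), estimate = opt.out$par)  # compare to the true beta
se <- sqrt(diag(solve(opt.out$hessian)))             # approximate standard errors
se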

# Using `glm`
m = glm(y ~ X-1, family=binomial(link="probit"))
# Warning message:
# glm.fit: fitted probabilities numerically 0 or 1 occurred 
coef(m)
#       X1       X2       X3       X4 
# 1.207346 4.585221 2.063990 3.430295 

There is probably a way to avoid evaluating pnorm twice; one possibility is sketched below.
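
For example, one (untested) sketch is to flip the sign of the linear predictor according to y, using the same identity Phi(-x) = 1 - Phi(x), so each observation needs only a single pnorm evaluation (loglik2 is just an illustrative name):

loglik2 <- function(par, X, y) {
  lp <- X %*% par
  # y == 1 needs log Phi(lp); y == 0 needs log(1 - Phi(lp)) = log Phi(-lp)
  -sum(pnorm(ifelse(y == 1, lp, -lp), log.p = TRUE))
}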


Data

set.seed(65819138)
X <- matrix(rnorm(200*3), ncol=3)
X <- cbind(1, X) 
beta <- c(1,4,2,3) 
pi_i <- pnorm(X%*%beta)
y <- rbinom(200, 1, pi_i)