I am trying to use R to evaluate an integral.
Here is the function I am trying to evaluate:
# Define the marginal likelihood function (which we will then integrate)
marginal <- function(z, se, alpha, beta, noise) {
  log_result <- log(1 / ((z^-1 * se - 4)^0.5 * z^-1 * se^(3 / 2))) +
    log(dbeta(0.5 - (z^-1 * se - 4)^0.5 / (2 * (z^-1 * se)^0.5), alpha, beta) +
        dbeta(0.5 + (z^-1 * se - 4)^0.5 / (2 * (z^-1 * se)^0.5), alpha, beta)) +
    dgamma(z, shape = 1 / noise, scale = noise, log = TRUE) +
    log(z^-1)
  return(exp(log_result))
}
integrate(marginal, lower = 0, upper = 0.9584705, se = 3.833882,
          alpha = 4.568112, beta = 4.830437, noise = 1e-4)$value
However, this call fails with the error that the integral is probably divergent. The integral is not actually divergent; moreover, the integrand can be evaluated to a finite value at every point between 0 and my upper bound of ~0.96.
I plotted the function to see what it looks like; here is an example:
Interestingly, if I change the lower bound to something like 0.8, I now get a finite answer, 0.0001505723. What seems to be happening is that the function is essentially zero over most of the interval and only peaks near the boundary, so integrate's adaptive quadrature apparently never samples the region around the peak properly.
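For concreteness, here is a self-contained snippet that reproduces both observations (the integrand is restated with explicit parentheses; the behaviour is identical to the function above):

```r
# Restatement of the integrand with explicit parentheses
# (identical behaviour), so this snippet runs on its own:
marginal <- function(z, se, alpha, beta, noise) {
  r <- se / z  # z^-1 * se
  log_result <- log(1 / ((r - 4)^0.5 * (1 / z) * se^(3 / 2))) +
    log(dbeta(0.5 - (r - 4)^0.5 / (2 * r^0.5), alpha, beta) +
        dbeta(0.5 + (r - 4)^0.5 / (2 * r^0.5), alpha, beta)) +
    dgamma(z, shape = 1 / noise, scale = noise, log = TRUE) +
    log(1 / z)
  exp(log_result)
}

# The integrand is finite at every interior grid point of (0, 0.9584705):
z_grid <- seq(1e-6, 0.9584705 - 1e-6, length.out = 1000)
vals <- marginal(z_grid, se = 3.833882, alpha = 4.568112,
                 beta = 4.830437, noise = 1e-4)
all(is.finite(vals))  # TRUE

# Yet integrating from 0 aborts with "the integral is probably divergent",
# while integrating from 0.8 returns a finite value:
from_zero <- try(integrate(marginal, lower = 0, upper = 0.9584705,
                           se = 3.833882, alpha = 4.568112,
                           beta = 4.830437, noise = 1e-4),
                 silent = TRUE)
integrate(marginal, lower = 0.8, upper = 0.9584705,
          se = 3.833882, alpha = 4.568112,
          beta = 4.830437, noise = 1e-4)$value
```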
Unfortunately, I am computing these integrals over many different values of alpha, beta, noise, se, etc., so the appropriate lower bound changes from case to case: sometimes the integrand peaks at the boundary as in the attached image, and sometimes in the middle of the interval (the upper bound itself also depends on the data). I therefore cannot simply set the lower bound to 0.8 everywhere. For computational reasons I would also prefer not to write a test to determine where this lower bound should be each time, as that feels quite ad hoc and not a suitable solution for my purposes.
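To illustrate, the kind of ad-hoc bound-finding test I mean would be something like the following hypothetical sketch (the grid size and threshold are arbitrary choices of mine; the integrand is restated so the snippet runs on its own):

```r
# Hypothetical sketch of the ad-hoc bound-finding I'd like to avoid
# (integrand restated so the snippet is self-contained):
marginal <- function(z, se, alpha, beta, noise) {
  r <- se / z
  exp(log(1 / ((r - 4)^0.5 * (1 / z) * se^(3 / 2))) +
      log(dbeta(0.5 - (r - 4)^0.5 / (2 * r^0.5), alpha, beta) +
          dbeta(0.5 + (r - 4)^0.5 / (2 * r^0.5), alpha, beta)) +
      dgamma(z, shape = 1 / noise, scale = noise, log = TRUE) +
      log(1 / z))
}

upper <- 0.9584705
z_grid <- seq(1e-6, upper - 1e-6, length.out = 10000)
vals <- marginal(z_grid, se = 3.833882, alpha = 4.568112,
                 beta = 4.830437, noise = 1e-4)

# Start integrating at the first grid point where the integrand
# carries non-negligible mass (threshold is arbitrary):
lo <- z_grid[min(which(vals > 1e-12 * max(vals)))]
integrate(marginal, lower = lo, upper = upper,
          se = 3.833882, alpha = 4.568112,
          beta = 4.830437, noise = 1e-4)$value
```

Something along these lines does work, but the grid resolution and the threshold are both arbitrary, which is exactly why it feels unsuitable.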
Is there a better way to evaluate this integral?