
I'm running a Bayesian model in rjags, and I would like to be able to output a plot of the MCMC trace, the posterior distributions of my parameters (which I can already obtain from coda), and a comparison of the posterior vs. prior distributions.

Is there any way to save the priors you specify in the JAGS model as a list or something similar, so that I don't have to copy and paste all the distributions with their parameters (which exponentially raises the likelihood of errors)?

I have the following piece of code:

cat(
'model{
    for(i in 1:n){
        P.hat[i]    ~  dnorm(pi, df/sigma2)
        SS[i]       ~  dgamma((df-1)/2, sigma2/2 )
        R[i]        ~  dbin(theta, N)   
    }
    # relations
    gam         <- m*vs+(1-m)*va
    theta       <- (pi*beta*gam)/(gam*dt+(1-gam)*du)
    # numerical values      
    df          <- 15
    # priors
    pi          ~  dnorm(0.05, 2) T(0,1)    # JAGS truncation is T(,), not WinBUGS I(,)
    sigma2      ~  dgamma(2, 0.1*df)
    beta        ~  dunif(0, 0.4)
    m           ~  dbeta(1, 4)
    vs          ~  dbeta(2, 9)
    va          ~  dbeta(2, 5)
    dt          ~  dnorm(0.3, 2) T(0,10)
    du          ~  dnorm(1.25, 2) T(0,10)
}',
file='model1.bug')

and I would like to "save" the "priors" section.

Thanks in advance for all your answers! EM

Questions about code belong on Stack Overflow. We will migrate this for you. - gung - Reinstate Monica

1 Answer


The short answer is no - JAGS (and BUGS) make no explicit distinction between what you define as priors and the other distributions in the model, so there is no way to ask JAGS to give you information on specific sub-sections of the model. The usual way to look at your prior distributions is to plot (or otherwise summarise) them separately within R.
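For example, a prior-vs-posterior comparison can be done by overlaying the analytic prior density on the posterior draws. This is just a sketch for the `vs` parameter, whose prior is dbeta(2, 9); `post_vs` is a stand-in for the column of your coda samples, simulated here so the example is self-contained:

```r
# Stand-in for your posterior draws, e.g. as.matrix(samples)[, "vs"]
post_vs <- rbeta(5000, 12, 40)

# Posterior as a kernel density, prior as the analytic curve
d <- density(post_vs)
plot(d, main = "vs: posterior vs. prior", xlab = "vs")
curve(dbeta(x, 2, 9), from = 0, to = 1, add = TRUE, lty = 2)
legend("topright", legend = c("posterior", "prior"), lty = c(1, 2), bty = "n")
```

You would repeat this with the appropriate d-function (dnorm, dgamma, dunif, ...) for each parameter, which is exactly the copy-and-paste duplication you were hoping to avoid.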

However, there is a trick that will work with your model to get what you want: set the upper index of your loop (n) to 0 in the data. This makes JAGS skip everything inside the for loop, effectively removing the likelihood component of your model and leaving only the priors. If you then monitor pi, sigma2, etc., you will get samples from their prior distributions. As there is no likelihood to compute, the model should also run much faster! You do need to run the model twice, though: once for the priors, and once with the data as normal for the posteriors.
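The two-run workflow might look like the following sketch (it requires a working JAGS installation; the object names `P.hat`, `SS`, `R`, `N`, and `n` in the second run are placeholders for your actual data):

```r
library(rjags)

# Prior-only run: n = 0 empties the likelihood loop, so the monitored
# nodes are sampled straight from their priors.
m_prior <- jags.model("model1.bug", data = list(n = 0), n.chains = 1)
prior_draws <- coda.samples(m_prior,
                            variable.names = c("pi", "sigma2", "beta",
                                               "m", "vs", "va", "dt", "du"),
                            n.iter = 10000)

# Posterior run: same model file, now supplying the real data.
m_post <- jags.model("model1.bug",
                     data = list(P.hat = P.hat, SS = SS, R = R,
                                 N = N, n = n),
                     n.chains = 1)
post_draws <- coda.samples(m_post,
                           variable.names = c("pi", "sigma2", "beta",
                                              "m", "vs", "va", "dt", "du"),
                           n.iter = 10000)
```

Both `prior_draws` and `post_draws` are mcmc.list objects, so the same plotting code (e.g. the density overlays above, or plot() from coda) works on each without duplicating the prior specifications.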