I am trying to test the hypothesis of market efficiency in bookmaker odds for football matches. I have estimated a multinomial logit model with the mlogit package:
Model: outcome ~ log(P1/Px) + log(P2/Px)
where P1 is the implicit bookie probability of a home win, Px is the implicit bookie probability of a draw, etc. Draw (x) is the reference category.
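For concreteness, here is a minimal sketch of how such an unconstrained model could be fitted with nnet::multinom. The data frame name `odds` and its columns (P1, Px, P2, outcome) are my assumptions, and the data are simulated, since the actual data layout isn't shown:

```r
library(nnet)

# Hypothetical example data: implied bookmaker probabilities and outcomes.
set.seed(1)
n <- 500
p <- matrix(runif(3 * n), n, 3)
p <- p / rowSums(p)
odds <- data.frame(
  P1 = p[, 1], Px = p[, 2], P2 = p[, 3],
  outcome = apply(p, 1, function(pr) sample(c("1", "x", "2"), 1, prob = pr))
)
odds$outcome <- relevel(factor(odds$outcome), ref = "x")  # draw = reference

# Unconstrained multinomial logit: two equations (1 vs x, 2 vs x), each with
# an intercept and coefficients on log(P1/Px) and log(P2/Px).
fit_u <- multinom(outcome ~ log(P1 / Px) + log(P2 / Px), data = odds,
                  trace = FALSE)
ll_u <- as.numeric(logLik(fit_u))
```

With the draw as reference level, coef(fit_u) is a 2x3 matrix: one row per non-reference outcome, one column per parameter.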
Now I want to use a likelihood-based test (LR, Wald, or LM) for the following hypothesis:
H0: β1 = (0, 1, 0), β2 = (0, 0, 1)
I.e., under the null hypothesis the intercept is 0 in both equations; the coefficient on log(P1/Px) is 1 in the home-win equation and 0 in the away-win equation; and the coefficient on log(P2/Px) is 0 in the home-win equation and 1 in the away-win equation.
I am having trouble understanding how to fit the constrained model (the H0 model), from which I would extract a log-likelihood to compare with that of the unconstrained ML-estimated model in an LR test.
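One observation (mine, not from any package documentation): under this particular H0 the constrained model has no free parameters at all. The linear predictors are fixed at log(P1/Px) and log(P2/Px), so the fitted outcome probabilities are simply the bookmaker's implied probabilities, renormalized to sum to 1, and the constrained log-likelihood can be computed directly without fitting anything. A base-R sketch, again assuming a hypothetical data frame `odds` with columns P1, Px, P2, outcome:

```r
# Hypothetical example data: implied probabilities and observed outcomes.
set.seed(1)
n <- 500
p_true <- matrix(runif(3 * n), n, 3)
p_true <- p_true / rowSums(p_true)
odds <- data.frame(
  P1 = p_true[, 1], Px = p_true[, 2], P2 = p_true[, 3],
  outcome = apply(p_true, 1, function(pr) sample(c("1", "x", "2"), 1, prob = pr))
)

# Under H0 (beta1 = (0,1,0), beta2 = (0,0,1)) the model probabilities are
# just the normalized implied probabilities -- no estimation needed.
pm <- as.matrix(odds[, c("P1", "Px", "P2")])
pm <- pm / rowSums(pm)                  # removes any bookmaker overround
colnames(pm) <- c("1", "x", "2")
ll_0 <- sum(log(pm[cbind(seq_len(nrow(pm)), match(odds$outcome, colnames(pm)))]))
```

ll_0 picks out, for each match, the log of the implied probability of the outcome that actually occurred, and sums over matches.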
I have tried following the instructions from page 57 here: https://cran.r-project.org/web/packages/mlogit/vignettes/mlogit.pdf
but I don't understand how to specify my H0 model using the update() function. Is it possible?
If you know how to do an equivalent test using the nnet package (multinom), perhaps via an offset, an explanation of how to do that would also be much appreciated.
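Putting the pieces together, an LR test could then look like the sketch below (my own construction, on simulated data with assumed column names, not the questioner's actual data). There are 6 restrictions under H0: two intercepts and four slope coefficients are fixed.

```r
library(nnet)

set.seed(1)
n <- 500
p <- matrix(runif(3 * n), n, 3)
p <- p / rowSums(p)                      # normalized implied probabilities
odds <- data.frame(
  P1 = p[, 1], Px = p[, 2], P2 = p[, 3],
  outcome = apply(p, 1, function(pr) sample(c("1", "x", "2"), 1, prob = pr))
)
odds$outcome <- relevel(factor(odds$outcome), ref = "x")

# Unconstrained model.
fit_u <- multinom(outcome ~ log(P1 / Px) + log(P2 / Px), data = odds,
                  trace = FALSE)
ll_u <- as.numeric(logLik(fit_u))

# Constrained (H0) log-likelihood: probabilities are fixed at the implied
# probabilities, so no fitting is required.
pm <- as.matrix(odds[, c("P1", "Px", "P2")])
pm <- pm / rowSums(pm)
colnames(pm) <- c("1", "x", "2")
ll_0 <- sum(log(pm[cbind(seq_len(n), match(odds$outcome, colnames(pm)))]))

# LR test with 6 restrictions (2 intercepts + 4 slopes fixed under H0).
lr <- 2 * (ll_u - ll_0)
p_value <- pchisq(lr, df = 6, lower.tail = FALSE)
```

Since the simulated outcomes here are drawn exactly under H0, the test should not reject at conventional levels for most seeds.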
Thanks for any help!