Archive for Bayesian model evaluation

reading classics (#1)

Posted in Books, Statistics, University life on October 31, 2013 by xi'an

Here we are, back in a new school year and with new students reading the classics. Today, Ilaria Masiani started the seminar with a presentation of the Spiegelhalter et al. (2002) DIC paper, already heavily mentioned on this blog. Here are the slides, posted on slideshare (if you know of another website hosting and displaying slides, let me know: the incompatibility between Firefox and slideshare drives me crazy!, well, almost…)

I have already said a lot about DIC on this blog so I will not repeat my reservations here. I enjoyed the link with the log scores and the Kullback-Leibler divergence, but failed to see a proper intuition for defining the effective number of parameters the way it is defined in the paper… The presentation was quite faithful to the original and, as is usual in the reading seminars (esp. the first one of the year), did not go far enough (for my taste) in the critical evaluation of the themes therein. Maybe an idea for next year would be to ask one of my PhD students to give the zeroth seminar…
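For readers wanting to see the effective number of parameters in action, here is a minimal toy sketch of the DIC quantities on a conjugate normal-mean model (the model, prior, and all numerical choices below are mine for illustration, not from the paper):

```r
# Toy DIC computation from posterior draws, assuming y_i ~ N(mu, 1)
# with a N(0, 10^2) prior on mu (purely illustrative choices)
set.seed(42)
y <- rnorm(20, mean = 1)
n <- length(y); tau2 <- 100
# conjugate posterior for mu
post_var <- 1 / (n + 1 / tau2)
post_mean <- post_var * sum(y)
mu_draws <- rnorm(10^4, post_mean, sqrt(post_var))
# deviance D(mu) = -2 log p(y | mu)
dev <- function(mu) -2 * sum(dnorm(y, mu, 1, log = TRUE))
Dbar <- mean(sapply(mu_draws, dev))  # posterior mean of the deviance
Dhat <- dev(mean(mu_draws))          # deviance at the posterior mean
pD <- Dbar - Dhat                    # effective number of parameters
DIC <- Dhat + 2 * pD
```

With a single well-identified parameter, pD comes out close to 1, as it should.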

Testing and significance

Posted in R, Statistics, University life on September 13, 2011 by xi'an

Julien Cornebise pointed me to this Guardian article that itself summarises the findings of a Nature Neuroscience article I cannot access. The core of the paper is that a large portion of comparative studies conclude that there is a significant difference between protocols when one protocol's result is significantly different from zero and the other(s) is (are) not… From a frequentist perspective (I am not even addressing the Bayesian aspects of using those tests!), under the null hypothesis that both protocols induce the same null effect, the probability of wrongly deriving a significant difference can be evaluated by

> x=rnorm(10^6)  # protocol 1 estimates under the null
> y=rnorm(10^6)  # protocol 2 estimates under the null
> # y significant, x not, yet no significant difference between them
> sum((abs(x)<1.96)*(abs(y)>1.96)*(abs(x-y)<1.96*sqrt(2)))
[1] 31805
> # x significant, y not, yet no significant difference between them
> sum((abs(x)>1.96)*(abs(y)<1.96)*(abs(x-y)<1.96*sqrt(2)))
[1] 31875
> # overall probability of wrongly concluding to a difference
> (31805+31875)/10^6
[1] 0.06368

which rises to a 26% probability of error when x is drifted by 2! (The maximum error, just above 30%, occurs when x is drifted by around 2.6…)
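The drifted case can be reproduced by the same kind of simulation, shifting x by delta=2 (the seed below is my own addition for reproducibility):

```r
# One protocol has a true effect of size delta = 2, the other none;
# count cases where one result is significant, the other is not,
# and yet the direct test of the difference is not significant
set.seed(101)
delta <- 2
x <- rnorm(10^6, mean = delta)
y <- rnorm(10^6)
err <- mean((abs(x) > 1.96) * (abs(y) < 1.96) * (abs(x - y) < 1.96 * sqrt(2)) +
            (abs(x) < 1.96) * (abs(y) > 1.96) * (abs(x - y) < 1.96 * sqrt(2)))
err  # close to 0.26
```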

(This post was written before Super Andrew posted his own “difference between significant and not significant”! My own of course does not add much to the debate.)

ABC model choice not to be trusted

Posted in Mountains, R, Statistics, University life on January 27, 2011 by xi'an

This may sound like a paradoxical title given my recent production in this area of ABC approximations, especially after the disputes with Alan Templeton, but I have come to the conclusion that ABC approximations to the Bayes factor are not to be trusted. When working one afternoon in Park City with Jean-Michel and Natesh Pillai (drinking tea in front of a fake log-fire!), we looked at the limiting behaviour of the Bayes factor constructed by an ABC algorithm, i.e., by approximating posterior probabilities for the models from the frequencies of acceptances of simulations from those models (assuming the use of a common summary statistic to define the distance to the observations). Rather obviously (a posteriori!), we ended up with the true Bayes factor based on the distributions of the summary statistics under both models!
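The construction in question can be sketched on a toy example: two Gaussian models compared through a common summary statistic (the sample mean), with the model posterior probabilities estimated by acceptance frequencies. Every concrete choice below (models, priors, epsilon, sample sizes) is mine for illustration, not from the paper:

```r
# Toy ABC model choice: model 1 has prior mu ~ N(0, 1),
# model 2 has prior mu ~ N(0, 10^2); data are N(mu, 1)
set.seed(7)
yobs <- rnorm(50, mean = 0.5)  # hypothetical observed sample
sobs <- mean(yobs)             # common summary statistic
N <- 10^5; eps <- 0.05
# draw the model index uniformly, then mu from that model's prior
m <- sample(1:2, N, replace = TRUE)
mu <- ifelse(m == 1, rnorm(N, 0, 1), rnorm(N, 0, 10))
# simulate the summary directly: mean of 50 N(mu, 1) draws
ssim <- rnorm(N, mean = mu, sd = 1 / sqrt(50))
acc <- abs(ssim - sobs) < eps
# posterior model probabilities estimated by acceptance frequencies
table(m[acc]) / sum(acc)
# ABC Bayes factor of model 1 versus model 2
B12 <- sum(acc & m == 1) / sum(acc & m == 2)
```

As epsilon shrinks, this ratio converges to the Bayes factor based on the distribution of the sample mean under each model, not to the Bayes factor based on the full data, which is the point of the post.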
