Yesterday, I gave this talk in the Model Assessment working group seminar at SAMSI, in connection with the Sequential Monte Carlo program SAMSI is running this year. The format was quite nice, as the two-hour schedule allowed for a lot of questions and interruptions (as well as for my experimenting with smart-board writing!). The talk is based on several papers written with Jean-Michel Marin this year and on the nested sampling paper with Nicolas Chopin discussed there a few days ago. (This will also be the topic of my advanced graduate course at CREST next February.) The methods that generated the most comments were:
- reverse importance sampling, à la Gelfand & Dey (1994), which is a very elegant method, even though it may be prone to misbehaviour since it relates to harmonic means (see Radford Neal’s point). (Interestingly [?], googling that term leads to links to Ó Ruanaidh & Fitzgerald’s (1996) book.)
- bridge sampling, à la Gelman & Meng (1998), especially for its curious connection with mixture sampling and defensive sampling.
- Chib’s (1995) marginal likelihood estimator for latent variable models, both because of the label switching difficulty (that was maybe lost on part of the audience) and because of the direct permutation fix.
- nested sampling, for both its formal simplicity and not-so-simple implementation.
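To make the first of these concrete, here is a toy sketch (my own illustration, not taken from the talk or the papers) of the Gelfand & Dey estimator on a one-observation conjugate normal model, where the true evidence is available in closed form; the model, the tuning density g, and all numerical values are assumptions chosen for the example.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x, N = 1.0, 50_000              # one observation, x | theta ~ N(theta, 1)

# Conjugate toy model: prior theta ~ N(0, 1), so the posterior is
# N(x/2, 1/2) and the true evidence is m(x) = N(x; 0, 2).
theta = rng.normal(x / 2, np.sqrt(0.5), N)     # exact posterior draws

# Gelfand & Dey identity: E_post[ g(theta) / (f(x|theta) pi(theta)) ] = 1/m(x)
# for any density g dominated by the posterior; g = prior recovers the
# (unstable) harmonic mean of the likelihoods, hence a lighter-tailed g here.
g = norm.pdf(theta, x / 2, 0.6)
w = g / (norm.pdf(x, theta, 1) * norm.pdf(theta, 0, 1))
m_hat = 1 / w.mean()

m_true = norm.pdf(x, 0, np.sqrt(2))
```

The choice of g is what separates this from the harmonic mean: a g with tails lighter than the posterior keeps the weights bounded and the variance finite.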
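Bridge sampling can be sketched on the same toy model (again my own illustrative setup, not from the talk), here with the geometric bridge function and a proposal density roughly matched to the posterior; the proposal parameters are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x, N = 1.0, 50_000   # same toy model: x | theta ~ N(theta, 1), theta ~ N(0, 1)

q1 = lambda t: norm.pdf(x, t, 1) * norm.pdf(t, 0, 1)  # unnormalised posterior, normaliser m(x)
q2 = lambda t: norm.pdf(t, 0.4, 0.8)                  # normalised proposal, roughly matched

th_post = rng.normal(x / 2, np.sqrt(0.5), N)   # draws from the (here exact) posterior
th_prop = rng.normal(0.4, 0.8, N)              # draws from the proposal

# Geometric bridge h = 1/sqrt(q1*q2) in the bridge identity
#   m(x) = E_prop[q1*h] / E_post[q2*h]
m_hat = np.mean(np.sqrt(q1(th_prop) / q2(th_prop))) / \
        np.mean(np.sqrt(q2(th_post) / q1(th_post)))

m_true = norm.pdf(x, 0, np.sqrt(2))
```

The geometric bridge is not the optimal h of Meng & Wong (that one is defined iteratively), but it already shows the two-sample structure that connects bridge sampling to mixture and defensive sampling.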
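Chib's identity itself can be shown on the same conjugate toy model (my own minimal illustration, where the posterior ordinate is closed-form and no simulation is needed); in genuine latent-variable models the ordinate is estimated from Gibbs output, and for mixtures it must be symmetrised over label permutations to avoid the label-switching bias mentioned above.

```python
import numpy as np
from scipy.stats import norm

x = 1.0   # same toy model: x | theta ~ N(theta, 1), theta ~ N(0, 1)

# Chib's identity: m(x) = f(x|theta*) pi(theta*) / pi(theta*|x),
# valid at any fixed theta*, typically a high-posterior point.
theta_star = x / 2
post_ordinate = norm.pdf(theta_star, x / 2, np.sqrt(0.5))  # closed-form here
m_hat = norm.pdf(x, theta_star, 1) * norm.pdf(theta_star, 0, 1) / post_ordinate

m_true = norm.pdf(x, 0, np.sqrt(2))
```

In this conjugate case the identity is exact for any theta*, which makes the point that all the difficulty of Chib's method lies in estimating the posterior ordinate, not in the identity.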
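Finally, a bare-bones nested sampling sketch (again my own toy setup, with an assumed uniform prior, Gaussian likelihood, and a naive constrained random walk standing in for a proper constrained sampler) illustrates both the formal simplicity and the implementation subtlety: the only hard step is drawing from the prior restricted to higher likelihood values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy evidence: theta ~ Uniform(0, 1), L(theta) = exp(-(theta-0.5)^2/(2*0.1^2)),
# so Z = int_0^1 L(t) dt ~= 0.1 * sqrt(2*pi) ~= 0.2507.
L = lambda t: np.exp(-0.5 * ((t - 0.5) / 0.1) ** 2)

n_live = 400
live = rng.uniform(0, 1, n_live)
live_L = L(live)

Z, X_prev = 0.0, 1.0
for i in range(1, 10 * n_live + 1):
    worst = int(np.argmin(live_L))
    L_min = live_L[worst]
    X = np.exp(-i / n_live)            # deterministic prior-mass schedule
    Z += L_min * (X_prev - X)          # evidence in the discarded shell
    X_prev = X

    # Replace the worst point by an (approximate) draw from the prior
    # restricted to {L > L_min}: a short random walk started from a
    # random live point, with steps scaled to the live set's spread.
    t = live[rng.integers(n_live)]
    step = max(live.std(), 1e-12)
    for _ in range(30):
        prop = t + step * rng.normal()
        if 0.0 < prop < 1.0 and L(prop) > L_min:
            t = prop
    live[worst], live_L[worst] = t, L(t)

Z += X_prev * live_L.mean()            # leftover mass of the live points
Z_true = 0.1 * np.sqrt(2 * np.pi)
```

The short constrained random walk is exactly the not-so-simple part: its draws are only approximately uniform over the constrained region, and this approximation is where practical implementations differ.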
The part about cross-model methods did not seem so interesting to the audience, maybe because it is mostly negative. The overall sobering conclusion, however, was that most of those methods were likely to fail in large dimensions, which is true when using (as we do) importance functions derived from nonparametric principles.