Today, I went to Milano for 13 hours to give a seminar at l’Università Bocconi, where I gave a talk on testing via mixtures (using the same slides as at ISBA last Spring). It was the first time I was in Milano (and thus at Bocconi) for more than a transfer to MCMski or to Pavia, and it was great to walk through the city, and of course to meet and share with many friends there. While I glimpsed the end of the sunrise on the Italian Alps (near Monte Rosa?!), I was too late on my way back for the sunset.
A. Mootoovaloo, B. Bassett, and M. Kunz just arXived a paper on the computation of Bayes factors by the Savage-Dickey representation through a supermodel (or encompassing model). (I wonder why Savage-Dickey is so popular in astronomy and cosmology statistical papers and not so much elsewhere.) Recall that the trick is to write the Bayes factor in favour of the encompassing model as the ratio of the posterior and of the prior for the tested parameter (thus eliminating nuisance or common parameters) at its null value,

π(φ⁰|x) / π(φ⁰),
modulo some continuity constraints on the prior density, and the assumption that the conditional prior on the nuisance parameters is the same under the null model and under the encompassing model [given the null value φ⁰]. If this sounds confusing or even shocking from a mathematical perspective, check the numerous previous entries on this topic on the ‘Og!
The supermodel created by the authors is a mixture of the original models, as in our paper, and… hold the presses!, it is a mixture of the likelihood functions, as in Phil O’Neill’s and Theodore Kypraios’ paper, which is not mentioned in the current paper and obviously should be. In this representation, the posterior distribution on the mixture weight α is a linear function of α involving both evidences, α(m¹−m²)+m², times the artificial prior on α. The resulting estimator of the Bayes factor thus shares features with bridge sampling, reversible jump, and the importance-sampling version of nested sampling we developed in our Biometrika paper, in addition to O’Neill and Kypraios’s solution.
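The linearity in α makes the mechanism easy to check on a toy case I made up for illustration (a single x ~ N(θ,1), with θ=0 under model 1 and θ ~ N(0,1) under model 2, so both evidences m¹ and m² are closed-form): with a uniform prior on α, the posterior mean of α is (2B+1)/(3(B+1)) for B=m¹/m², which inverts to give back the Bayes factor.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical toy: x ~ N(theta, 1); M1 fixes theta = 0, M2 sets theta ~ N(0,1),
# so both evidences are available in closed form
x = 1.3
m1 = norm.pdf(x, 0, 1)            # evidence under M1
m2 = norm.pdf(x, 0, np.sqrt(2))   # evidence under M2 (prior predictive)
B12 = m1 / m2

# Supermodel: mixture of the two likelihoods with weight alpha and a U(0,1)
# prior on alpha; the marginal posterior density of alpha is then
# proportional to alpha*(m1 - m2) + m2, i.e. linear in alpha
alphas = rng.uniform(size=1_000_000)
w = alphas * (m1 - m2) + m2       # unnormalised posterior density at each draw
post_mean = np.average(alphas, weights=w)

# With a U(0,1) prior, E[alpha | x] = (2B + 1) / (3(B + 1)),
# which inverts to B = (3E - 1) / (2 - 3E)
B_hat = (3 * post_mean - 1) / (2 - 3 * post_mean)
print(B12, B_hat)
```

Of course this only works because the evidences enter the posterior on α, which is the whole point (and the whole circularity) of the construction.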
The following quote is inaccurate, since the MCMC algorithm needs to simulate the parameters of the compared models in realistic settings, hence to represent the multidimensional integrals by Monte Carlo versions.
“Though we have a clever way of avoiding multidimensional integrals to calculate the Bayesian Evidence, this new method requires very efficient sampling and for a small number of dimensions is not faster than individual nested sampling runs.”
I actually wonder at the sheer rationale of running an intensive MCMC sampler in such a setting, when the weight α is completely artificial. It is only used to jump from one model to the next, which sounds quite inefficient when compared with simulating from both models separately and independently. This approach can also be seen as a special case of Carlin’s and Chib’s (1995) alternative to reversible jump. Using instead the Savage-Dickey representation is of course infeasible, which makes the overall reference to this method rather inappropriate in my opinion. Further, the examples processed in the paper all involve (natural) embedded models where the original Savage-Dickey approach applies. Creating an additional model to apply a pseudo-Savage-Dickey representation does not sound very compelling…
Incidentally, the paper also includes a discussion of a weird notion, the likelihood of the Bayes factor, B¹², which is most strangely plotted as a distribution over B¹². The only other place I met this notion is in Murray Aitkin’s book. Something’s unclear there or in my head!
“One of the fundamental choices when using the supermodel approach is how to deal with common parameters to the two models.”
This is an interesting question, although maybe not so relevant for the Bayes factor issue where it should not matter. However, as in our paper, multiplying the number of parameters in the encompassing model may hinder convergence of the MCMC chain or reduce the precision of the approximation of the Bayes factor. Again, from a Bayes factor perspective, this does not matter [while it does in our perspective].
To sort of make up for the failed attempt at Monte Rosa, we stayed an extra day and took a hike in Val d’Aosta, starting from Cogne where we had a summer school a few years ago. And from where we started for another failed attempt at La Grivola. It was a brilliant day and we climbed to the Rifugio Vittorio Sella (2588m) [along with many many other hikers], then lost the crowds to the Colle della Rossa (3195m), which meant an easy 1700m climb. By the end of the valley, we came across steinbocks (aka bouquetins, stambecchi) resting in the sun by a creek and unfazed by our cameras. (Abele Blanc told us later that they usually stay there, licking whatever salt they can find on the stones.)
The final climb to the pass was a bit steeper but enormously rewarding, with views of the Western Swiss Alps in full glory (Matterhorn, Combin, Breithorn) and all to ourselves. From there it was a downhill hike all the way back to our car in Cogne, 1700m, with no technical difficulty once we had crossed the few hundred meters of residual snow. And with the added reward of seeing several herds of the shy chamois mountain goat.
Except that my daughter’s rental mountaineering shoes started to make themselves heard and she could barely walk downwards. (She eventually lost her big toe nails!) It thus took us forever to get down (despite me running to the car and back to get lighter shoes) and we reached the car at 8:30, too late to contemplate a drive back to Paris.
In his plenary talk this morning, Arnaud Doucet discussed the application of pseudo-marginal techniques to the latent variable models he has been investigating for many years, and their limiting behaviour in terms of efficiency, with the idea of introducing correlation in the estimation of the likelihood ratio, reducing complexity from O(T²) to O(T√T). With the very surprising conclusion that the correlation must go to 1 at a precise rate to get this reduction, since perfect correlation would induce a bias. A massive piece of work, indeed!
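The correlation trick can be sketched on a deliberately tiny latent-variable model of my own (y|z ~ N(z,1), z ~ N(θ,1), so the exact likelihood N(y;θ,2) is known for checking): the auxiliary normals driving the unbiased likelihood estimate are kept in the chain state and refreshed by an autoregressive move with ρ close to 1, so that successive likelihood estimates are strongly correlated while the target remains exact.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy latent-variable model: y | z ~ N(z, 1), z ~ N(theta, 1),
# so the exact likelihood is N(y; theta, 2); prior theta ~ N(0, 10)
y, N, rho, n_iter = 1.0, 20, 0.99, 20_000

def loglik_hat(theta, u):
    # Unbiased importance-sampling estimate of p(y | theta), driven by
    # the auxiliary standard normals u (z_i = theta + u_i ~ N(theta, 1))
    z = theta + u
    w = np.exp(-0.5 * (y - z) ** 2) / np.sqrt(2 * np.pi)
    return np.log(w.mean())

def logprior(theta):
    return -0.5 * theta ** 2 / 10   # N(0, 10) prior, up to a constant

theta, u = 0.0, rng.standard_normal(N)
ll = loglik_hat(theta, u)
samples = []
for _ in range(n_iter):
    theta_p = theta + 0.5 * rng.standard_normal()
    # correlated pseudo-marginal: AR(1) refresh of the auxiliaries,
    # which leaves N(0, I) invariant and correlates the two estimates
    u_p = rho * u + np.sqrt(1 - rho ** 2) * rng.standard_normal(N)
    ll_p = loglik_hat(theta_p, u_p)
    if np.log(rng.uniform()) < ll_p + logprior(theta_p) - ll - logprior(theta):
        theta, u, ll = theta_p, u_p, ll_p
    samples.append(theta)

print(np.mean(samples[5000:]))  # exact posterior mean is 10*y/12 ≈ 0.83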
The next session of the morning was another instance of conflicting talks and I hopped from one room to the next to listen to Hani Doss’s empirical Bayes estimation with intractable constants (where maybe SAME could be of interest), Youssef Marzouk’s transport maps for MCMC, which sounds like an attractive idea provided the construction of the map remains manageable, and Paul Russel’s adaptive importance sampling, which somehow sounded connected with our population Monte Carlo approach. (With the additional step of considering transport maps.)
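To convey the transport-map idea in its crudest form, here is a toy sketch of mine (not Marzouk’s adaptive construction, where the map is learned): if a monotone map T pushes a N(0,1) reference forward onto the target, then running MCMC on the reference variable ξ, with pullback target π(T(ξ))·T′(ξ), is as easy as sampling a Gaussian. Here the target is LogNormal(0,1) and the exact map T(ξ)=exp(ξ) is known.

```python
import numpy as np

rng = np.random.default_rng(2)

def logtarget(x):
    # LogNormal(0, 1) density, up to a constant
    return -np.log(x) - 0.5 * np.log(x) ** 2

def T(xi):
    # exact transport map from N(0,1) to LogNormal(0,1)
    return np.exp(xi)

def log_pullback(xi):
    # log of pi(T(xi)) * T'(xi); here it reduces exactly to -xi**2/2,
    # i.e. the chain is a plain random walk on a standard normal
    return logtarget(T(xi)) + xi

xi, samples = 0.0, []
for _ in range(50_000):
    xi_p = xi + rng.standard_normal()
    if np.log(rng.uniform()) < log_pullback(xi_p) - log_pullback(xi):
        xi = xi_p
    samples.append(T(xi))      # push reference samples back to the target

print(np.mean(samples))        # E[LogNormal(0,1)] = exp(1/2) ≈ 1.65
```

The catch, of course, is that in the cases of interest the map is not known and must be built numerically, which is where the manageability question above arises.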
An interesting item of information I got from the final announcements at MCqMC 2016, just before heading to Monash, Melbourne, is that MCqMC 2018 will take place in the city of Rennes, Brittany, on July 2-6. Not only is it a nice location in its own right, but it is most conveniently located in space and time for attending ISBA 2018 in Edinburgh the week after! Just moving from one Celtic city to another. Along with other planned satellite workshops, this should make ISBA 2018 even more attractive [if need be!] for participants from overseas.