## Archive for AMIS

## my week at War[wick]

Posted in pictures, Running, Statistics, Travel, Uncategorized with tags ABC, AMIS, Bayesian asymptotics, COLT2014, empirical Bayes methods, empirical likelihood, MASDOC, University of Warwick, Warwickshire, Zeeman building on February 1, 2014 by xi'an

**T**his was a most busy and profitable week in Warwick as, in addition to meeting with local researchers and students on a wide range of questions and projects, giving an extended seminar to MASDOC students, attending as many seminars as humanly possible (!), and preparing for a 5k race by running in the Warwickshire countryside (in the dark and in the rain), I received visits from Kerrie Mengersen, Judith Rousseau and Jean-Michel Marin, with whom I made some progress on papers we are writing together. In particular, Jean-Michel and I wrote the skeleton of a paper we (still) plan to submit to COLT 2014 next week. And Judith, Kerrie and I drafted new if paradoxical connections between empirical likelihood and model selection. Jean-Michel and Judith also gave talks at the CRiSM seminar, Jean-Michel presenting the latest developments on the convergence of our AMIS algorithm, Judith summarising several papers on the analysis of empirical Bayes methods in non-parametric settings.

## MCMSki IV [day 2.5]

Posted in Mountains, pictures, Statistics, University life with tags ABC, AMIS, extremes, parallelisation, poster session, raclette, SNPs, sticky Metropolis, synthetic likelihood, warhammer on January 8, 2014 by xi'an

**D**espite a good rest during the ski break, my cold did not go away (no magic left in this world!) and I thus had a low attention span during the *Bayesian statistics and Population genetics* session: while Jukka Corander mentioned the improvement brought by our AMIS algorithm, I had difficulties getting the nature of the model, if only because he used a blackboard-like font that made math symbols too tiny to read. (Nice fonts, otherwise!) Daniel Lawson (of vomiting Warhammer fame!) talked about the alluring notion of a statistical emulator, and Barbara Engelhardt talked about variable selection in a SNP setting. I did not get a feeling for how handling ten million SNPs was possible towards a variable selection goal. My final session of the day was actually “my” invited session on ABC methods, where Richard Everitt presented a way of mixing exact approximation with ABC and synthetic likelihood (Wood, *Nature*) approximations. The resulting MAVIS algorithm is not out yet. The second speaker was Ollie Ratman, who spoke on his accurate ABC that I have discussed many times here. And Jean-Michel Marin managed to drive from Montpellier just in time to deliver his talk on our various explorations of the ABC model choice problem.

**A**fter a quick raclette at “home”, we headed back to the second poster session, where I had enough of a clear mind and not too much of a headache (!) to have several interesting discussions, incl. a new parallelisation suggested by Ben Calderhead, the sticky Metropolis algorithm of Luca Martino, the airport management video of Jegar Pitchforth, the mixture of Dirichlet distributions for extremes by Anne Sabourin, not to mention posters from Warwick or Paris. At the end of the evening I walked back to my apartment with the Blossom skis we had brought in the morning to attract registrations for the ski race: not enough to make up for the amount charged by the ski school. Too bad, especially given Anto’s efforts to get this amazing sponsoring!

## Initializing adaptive importance sampling with Markov chains

Posted in Statistics with tags AMIS, arXiv, cosmoPMC, evidence, Kullback, marginal likelihood, Multinest, nested sampling, PMC, population Monte Carlo, sequential Monte Carlo, simulation on May 6, 2013 by xi'an

**A**nother paper recently arXived by Beaujean and Caldwell elaborated on our population Monte Carlo papers (Cappé et al., 2005; Douc et al., 2007; Wraith et al., 2010) to design a more thorough starting distribution. Interestingly, the authors mention the fact that PMC is an EM-type algorithm to emphasize the importance of the starting distribution, as with a “poor proposal, PMC fails as proposal updates lead to a consecutively poorer approximation of the target” (p.2). I had not thought of this possible feature of PMC, which indeed proceeds along integrated EM steps and thus could converge to a local optimum (though not one poorer than the start, as the Kullback-Leibler divergence decreases).

**T**he solution proposed in this paper is similar to the one we developed in our AMIS paper. An important part of the simulation is dedicated to the construction of the starting distribution, which is a mixture deduced from multiple Metropolis-Hastings runs. I find the method spends an unnecessarily long time refining this mixture by culling the number of components: off-the-shelf clustering techniques should be sufficient, esp. if one considers that *the value of the target is available at every simulated point*. This has been my pet (if idle) theory for a long while: we do not take (enough) advantage of this informative feature in our simulation methods… I also find the *Student’s t versus Gaussian kernel* debate (p.6) somehow superfluous: as we showed in Douc et al. (2007), we can process Student’s *t* distributions, so we may as well work with those. And rather worry about the homogeneity assumption this choice implies: working with any elliptically symmetric kernel assumes a local Euclidean structure on the parameter space, for all components, and does not model highly curved spaces properly. Another pet theory of mine. As for picking the necessary number of simulations at each PMC iteration, I would add to the ESS and the survival rate of the components a measure of the Kullback-Leibler divergence, as it *should decrease* at each iteration (with an infinite number of particles).
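The mixture-proposal adaptation discussed above (PMC proceeding along integrated EM steps, with importance weights playing the role of responsibilities) can be sketched in a few lines. This is a minimal illustration on a toy bimodal target, not the paper's algorithm: the target, the two-component Gaussian proposal, and all settings are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # toy bimodal target: equal-weight mixture of N(-3,1) and N(3,1), unnormalised
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

def gauss_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# two-component Gaussian mixture proposal, deliberately mis-initialised
means, sds, probs = np.array([-1.0, 1.0]), np.array([2.0, 2.0]), np.array([0.5, 0.5])

for _ in range(10):
    n = 2000
    comp = rng.choice(2, size=n, p=probs)
    x = rng.normal(means[comp], sds[comp])
    # importance weights against the full mixture proposal
    dens = sum(p * gauss_pdf(x, m, s) for p, m, s in zip(probs, means, sds))
    w = np.exp(log_target(x)) / dens
    w /= w.sum()
    # integrated-EM update: weighted responsibilities refit each component
    for k in range(2):
        r = probs[k] * gauss_pdf(x, means[k], sds[k]) / dens
        wk = w * r
        probs[k] = wk.sum()
        means[k] = (wk * x).sum() / probs[k]
        sds[k] = np.sqrt((wk * (x - means[k]) ** 2).sum() / probs[k])
    probs /= probs.sum()

print(means, sds)  # components migrate towards the two modes at -3 and +3
```

Since the target value is computed at every simulated point anyway, monitoring the (estimated) Kullback-Leibler divergence or the ESS across iterations comes essentially for free.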

**A**nother interesting feature is the comparison with Multinest, the current version of nested sampling, developed by Farhan Feroz. This is the second time I have read a paper involving nested sampling in the past two days. While this PMC implementation does better than nested sampling on the examples processed in the paper, the Multinest outcome remains relevant, particularly because it handles multi-modality fairly well. The authors seem to think parallelisation is an issue with nested sampling, while I do not see why: at the most naïve stage, several nested samplers can be run in parallel and the outcomes pooled together.
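The "most naïve stage" of parallelisation mentioned above can be sketched as follows: independent nested-sampling runs, each producing its own evidence estimate, pooled by averaging. The nested sampler below is a deliberately crude toy (uniform prior, rejection sampling over the constrained likelihood region), nothing like Multinest; the model and all settings are invented for the illustration.

```python
import numpy as np

def nested_sampling(log_like, prior_draw, n_live=100, n_iter=500, seed=0):
    """A deliberately naive nested-sampling run returning an evidence estimate."""
    rng = np.random.default_rng(seed)
    live = prior_draw(rng, n_live)
    ll = log_like(live)
    Z, X_prev = 0.0, 1.0
    for i in range(n_iter):
        worst = int(np.argmin(ll))
        X = np.exp(-(i + 1) / n_live)            # expected shrinkage of the prior volume
        Z += np.exp(ll[worst]) * (X_prev - X)    # accumulate the evidence increment
        X_prev = X
        # replace the worst live point by a prior draw above the likelihood threshold
        # (plain rejection sampling: fine for this toy, hopeless in general)
        while True:
            cand = prior_draw(rng, 256)
            good = log_like(cand) > ll[worst]
            if good.any():
                j = int(np.argmax(good))
                live[worst] = cand[j]
                ll[worst] = log_like(cand[j])
                break
    return Z + np.exp(ll).mean() * X_prev        # remaining live-point contribution

# toy model: U(-5,5) prior, N(0,1) likelihood, so the evidence is close to 1/10
log_like = lambda x: -0.5 * x ** 2 - 0.5 * np.log(2.0 * np.pi)
prior_draw = lambda rng, n: rng.uniform(-5.0, 5.0, n)

# naive parallelisation: independent runs whose outcomes are pooled by averaging
runs = [nested_sampling(log_like, prior_draw, seed=s) for s in range(4)]
evidence = float(np.mean(runs))
print(evidence)  # close to 0.1
```

Each run in the list comprehension is independent, so they could equally be dispatched to separate processes or machines and their evidence estimates averaged afterwards, which is all the "pooling" amounts to at this naive level.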

## AMIS convergence, at last!

Posted in Statistics, University life with tags AMIS, Big'MC, convergence, importance sampling, PMC, seminar, unbiasedness on November 19, 2012 by xi'an

**T**his afternoon, Jean-Michel Marin gave his talk at the big’MC seminar. As already posted, it was about a convergence proof for AMIS, which gave me the opportunity to simultaneously read the paper and listen to the author. The core idea for adapting AMIS towards a manageable version is to update the proposal parameter based on the current sample rather than on the whole past. This facilitates the task of establishing convergence to the optimal (pseudo-true) value of the parameter, under the assumption that the optimal value is a known moment of the target. From there, convergence of the weighted mean is somehow natural when the number of simulations grows to infinity. (Note the special asymptotics of AMIS, though: the number of steps goes to infinity while the number of simulations per step grows a wee faster than linearly. In this respect, it is the opposite of PMC, where convergence is of a more traditional nature, pushing the number of simulations per step to infinity.) The second part of the convergence proof is more intricate, as it establishes that the multiple mixture estimator based on the “forward-backward” reweighting of all simulations since step zero does converge to the proper posterior moment. This relies on rather complex assumptions, but remains a magnificent *tour de force*. During the talk, I wondered whether, given the Markovian nature of the algorithm (since reweighting only occurs once simulation is over), an alternative estimator based on the optimal value of the simulation parameter would not be better than the original multiple mixture estimator: the proof is based on the equivalence between both versions….
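The “forward-backward” multiple mixture reweighting described above — every past simulation reweighted against the mixture of all past proposals, with the proposal parameter adapted to weighted moments of the pooled sample — can be sketched on a toy Gaussian example. This is only an illustration of the weighting scheme, not the setting or code of the convergence proof; the target, the Gaussian proposal family, and the (faster-than-linear) sample-size schedule are all made up.

```python
import numpy as np

rng = np.random.default_rng(1)
log_target = lambda x: -0.5 * (x - 2.0) ** 2    # unnormalised N(2,1) toy target

def gauss_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

samples, params = [], []
m, s = 0.0, 3.0                                 # initial proposal N(0, 3^2)
for t in range(8):
    n_t = 200 * (t + 1)                         # per-step sample size grows with t
    x = rng.normal(m, s, n_t)
    samples.append(x)
    params.append((m, s, n_t))
    # "forward-backward" reweighting: ALL past simulations weighted against the
    # sample-size-weighted mixture of ALL past proposals
    allx = np.concatenate(samples)
    total = sum(n for _, _, n in params)
    mix = sum(n / total * gauss_pdf(allx, mm, ss) for mm, ss, n in params)
    w = np.exp(log_target(allx)) / mix
    w /= w.sum()
    # adapt the proposal parameter to weighted moments of the pooled sample
    m = (w * allx).sum()
    s = np.sqrt((w * (allx - m) ** 2).sum())

print(m, s)  # the adapted proposal approaches the target moments (2, 1)
```

The adaptation above uses the whole past, which is precisely what makes the original AMIS hard to analyse; the manageable version in the talk instead updates the parameter from the current sample only, keeping the multiple mixture reweighting for the final estimator.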

## Bayesian seismic monitoring and big MC

Posted in Statistics, University life with tags AMIS, Bayesian statistics, consistency, Institut Henri Poincaré, LIP6, Paris, seminar on November 14, 2012 by xi'an

**T**wo announcements for seminars in Paris in the coming days:

**S**tuart Russell (University of California, Berkeley, visiting Paris 6 this year) will give a seminar next week, Thursday November **22**, 10am, LIP6, Université Paris 6, on *Global Seismic Monitoring: A Bayesian Approach*. Here is the link to the LIP6 webpage.

**O**n Thursday November **15**, 3pm, Institut Henri Poincaré, Jean-Michel Marin will give a talk at our big’MC seminar on the *Consistency of Adaptive Multiple Importance Sampling* (AMIS), following a long search of ours for this proof and a recent resolution of his along with Pierre Pudlo and Mohammed Sedki! Hopefully soon discussed on the ‘Og….