**V**ivek Roy, Aixin Tan and James Flegal arXived a new paper, Estimating standard errors for importance sampling estimators with multiple Markov chains, where they obtain a central limit theorem and hence standard error estimates when using several MCMC chains to simulate from a mixture distribution as an importance sampling function. I read it completely on my flight from Amsterdam to Calgary (along with half a dozen other papers, since it is a long flight!). I first thought it was connected to our AMIS algorithm (on whose convergence Vivek spent a few frustrating weeks when he visited me at the end of his PhD), because of the mixture structure. It is actually altogether different, in that the mixture is made of sufficiently complex unnormalised densities to act as an importance sampler, and, due to this complexity, the components can only be simulated via separate MCMC algorithms. Behind this characterisation lurks the challenging problem of estimating multiple normalising constants. The paper adopts the resolution by reverse logistic regression advocated in Charlie Geyer’s famous 1994 unpublished technical report. Besides the technical difficulties in establishing a CLT in this convoluted setup, the notion of mixing importance sampling and different Markov chains is quite appealing, especially in the domain of “tall” data and of splitting the likelihood in several or even many bits, since the mixture contains most of the information provided by the true posterior and can be corrected by an importance sampling step. In this very setting, I also think more adaptive schemes could be found to determine (estimate?!) the optimal weights of the mixture components.
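As a toy illustration of the general idea, and not of the paper’s actual estimator, here is a minimal self-normalised importance sampling sketch where the proposal is a two-component Gaussian mixture and the target is known only up to a constant; exact draws stand in for the separate MCMC chains, and every density, weight, and sample size is invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):  # unnormalised target: N(0,1) up to a constant
    return -0.5 * x**2

# two "components" with known mixture weights (exact draws standing in
# for separate MCMC chains targeting each component)
means = np.array([-1.0, 1.5])
sds = np.array([1.2, 1.2])
alphas = np.array([0.5, 0.5])

def log_mixture(x):  # normalised density of the mixture proposal
    comp = (-0.5 * ((x[:, None] - means) / sds) ** 2
            - np.log(sds * np.sqrt(2 * np.pi)))
    return np.log(np.exp(comp) @ alphas)

# draw from each component, pool, and self-normalise the weights
n = 5000
pooled = np.concatenate([rng.normal(m, s, n) for m, s in zip(means, sds)])
logw = log_target(pooled) - log_mixture(pooled)
w = np.exp(logw - logw.max())
w /= w.sum()

est = np.sum(w * pooled)    # IS estimate of E[X] under the target (true value 0)
ess = 1.0 / np.sum(w ** 2)  # effective sample size diagnostic
```

Self-normalisation is what lets the target remain unnormalised; in the paper’s setting the component densities are unnormalised too, which is where the reverse logistic regression estimates of the normalising constants would enter the weights.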

## Archive for Monte Carlo Statistical Methods

## importance sampling with multiple MCMC sequences

Posted in Mountains, pictures, Statistics, Travel, University life with tags adaptive MCMC, Ames, AMIS, Amsterdam, Charlie Geyer, importance sampling, Iowa, MCMC, Monte Carlo Statistical Methods, normalising constant, splitting data on October 2, 2015 by xi'an

## a simulated annealing approach to Bayesian inference

Posted in Books, pictures, Statistics, University life with tags ABC, ABC-MCMC, ABC-SMC, Bayesian Analysis, endoreversibility, mixture, Monte Carlo Statistical Methods, particle system, sequential Monte Carlo, simulated annealing, Switzerland on October 1, 2015 by xi'an

**A** misleading title if any! Carlos Albert arXived a paper with this title this morning and I rushed to read it, because it sounded like Bayesian analysis could be expressed as a special form of simulated annealing. But it happens to be a rather technical sequel [“that complies with physics standards”] to another paper I had missed, A simulated annealing approach to ABC, by Carlos Albert, Hans Künsch, and Andreas Scheidegger, which appeared in Statistics and Computing last year and is most interesting!

“These update steps are associated with a flow of entropy from the system (the ensemble of particles in the product space of parameters and outputs) to the environment. Part of this flow is due to the decrease of entropy in the system when it transforms from the prior to the posterior state and constitutes the well-invested part of computation. Since the process happens in finite time, inevitably, additional entropy is produced. This entropy production is used as a measure of the wasted computation and minimized, as previously suggested for adaptive simulated annealing” (p.3)

The notion behind this simulated annealing intrusion into the ABC world is that the choice of the tolerance can be adapted along iterations according to a simulated annealing schedule. Both papers make use of thermodynamic notions that are completely foreign to me, like endoreversibility, but aim at minimising the “entropy production of the system, which is a measure for the waste of computation”. The central innovation is to introduce an augmented target on (θ,x) that is

f(x|θ)π(θ)exp{-ρ(x,y)/ε},

where ε is the tolerance, while ρ(x,y) is a measure of distance to the actual observations, and to treat ε as an annealing temperature. In an ABC-MCMC implementation, the acceptance probability of a random walk proposal (θ’,x’) is then

exp{ρ(x,y)/ε-ρ(x’,y)/ε}∧1.

Under some regularity constraints, the sequence of targets converges to

π(θ|y)exp{-ρ(x,y)},

if ε decreases slowly enough to zero. While the representation of ABC-MCMC through kernels other than the Heaviside function can be found in the earlier ABC literature, the embedding of tolerance updating within the modern theory of simulated annealing is rather exciting.
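To make the mechanism concrete, here is a minimal ABC-MCMC sketch using the acceptance probability above and a geometric annealing schedule on ε; the Gaussian toy model, the random walk scale, the cooling rate, and the floor on ε are my own choices for illustration, not the authors’ adaptive schedule:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy model (an assumption, not the paper's example): x | θ ~ N(θ, 1),
# observed y = 0, flat prior on θ, symmetric random walk proposal
y_obs = 0.0
rho = lambda x: abs(x - y_obs)  # distance to the actual observation

theta, x = 2.0, rng.normal(2.0, 1.0)
eps = 2.0        # initial tolerance, treated as an annealing temperature
cooling = 0.999  # geometric annealing schedule
draws = []

for t in range(20000):
    theta_p = theta + 0.5 * rng.normal()  # random walk move in θ
    x_p = rng.normal(theta_p, 1.0)        # simulate a pseudo-observation
    # acceptance probability exp{ρ(x,y)/ε − ρ(x',y)/ε} ∧ 1
    if np.log(rng.uniform()) < (rho(x) - rho(x_p)) / eps:
        theta, x = theta_p, x_p
    # ε decreases slowly, floored here so the toy chain keeps moving
    eps = max(0.5, eps * cooling)
    if t > 10000:  # keep the post-burn-in draws
        draws.append(theta)

draws = np.array(draws)
```

The floor on ε is exactly the practical worry raised below: as the tolerance goes to zero, the acceptance probability of a fresh pseudo-observation collapses, so either ε or the proposal scale must stop shrinking for the chain to move.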

“Furthermore, we will present an adaptive schedule that attempts convergence to the correct posterior while minimizing the required simulations from the likelihood. Both the jump distribution in parameter space and the tolerance are adapted using mean fields of the ensemble.” (p.2)

What I cannot infer from a rather quick perusal of the papers is whether or not the implementation gets in the way of the all-inclusive theory. For instance, how can the Markov chain keep moving as the tolerance gets to zero? Even with a particle population and a sequential Monte Carlo implementation, it is unclear why the proposal scale factor [as in equation (34)] does not collapse to zero in order to ensure a non-zero acceptance rate. In the published paper, the authors used the same toy mixture example as ours [from Sisson et al., 2007], where we earned the award of the “incredibly ugly squalid picture”, with improvements in the effective sample size, but this remains a toy example. *(Hopefully a post to be continued in more depth…)*

## Je reviendrai à Montréal [NIPS 2015]

Posted in pictures, Statistics, Travel, University life with tags ABC, ABC in Montréal, Approximate Bayesian computation, Bayesian inference, Canada, MCMC, Monte Carlo integration, Monte Carlo Statistical Methods, Montréal, NIPS, NIPS 2015, probabilistic numerics, Québec, Robert Charlebois, scalability on September 30, 2015 by xi'an

**I** will be back in Montréal, as the song by Robert Charlebois goes, for the NIPS 2015 meeting there, more precisely for the workshops of December 11 and 12, 2015, on probabilistic numerics and ABC [à Montréal]. I was invited to give the first talk by the organisers of the NIPS workshop on probabilistic numerics, presumably to present a contrapuntal perspective on this mix of Bayesian inference with numerical issues, following my somewhat critical posts on the topic. And I also plan to attend some lectures in the (second) NIPS workshop on ABC methods. Which does not leave much free space for yet another workshop on Approximate Bayesian Inference! The day after, while I am flying back to London, there will be a workshop on scalable Monte Carlo. All workshops are calling for contributed papers to be presented during central poster sessions, to be submitted to abcinmontreal@gmail.com and to probnum@gmail.com and to aabi2015, before October 16.

Funny enough, I got a joking email from Brad, bemoaning my traitorous participation in the workshop on probabilistic numerics because of its “anti-MCMC” agenda, reflected in the summary:

“Integration is the central numerical operation required for Bayesian machine learning (in the form of marginalization and conditioning). Sampling algorithms still abound in this area, although it has long been known that Monte Carlo methods are fundamentally sub-optimal. The challenges for the development of better performing integration methods are mostly algorithmic. Moreover, recent algorithms have begun to outperform MCMC and its siblings, in wall-clock time, on realistic problems from machine learning.

The workshop will review the existing, by now quite strong, theoretical case against the use of random numbers for integration, discuss recent algorithmic developments, relationships between conceptual approaches, and highlight central research challenges going forward.”

Position that I hope to water down in my talk! In any case,

Je veux revoir le long désert

Des rues qui n’en finissent pas

Qui vont jusqu’au bout de l’hiver

Sans qu’il y ait trace de pas

[I want to see again the long desert of streets that never end, running to the very end of winter without a trace of footsteps]

## Non-reversible Markov Chains for Monte Carlo sampling

Posted in pictures, Statistics, Travel, University life with tags ABC, Alan Turing Institute, CRiSM, Hamiltonian Monte Carlo, intractable likelihood, lifting, Monte Carlo Statistical Methods, non-reversible diffusion, NUTS, overdamped Langevin algorithm, random walk, University of Warwick, workshop on September 24, 2015 by xi'an

**T**his “week in Warwick” was not chosen at random as I was aware there is a workshop on non-reversible MCMC going on. (Even though CRiSM sponsored so many workshops in September that almost any week would have worked for the above sentence!) It has always been kind of a mystery to me that non-reversibility could make a massive difference in practice, even though I am quite aware that it does. And I can grasp some of the theoretical arguments why it does. So it was quite rewarding to sit in this Warwick amphitheatre and learn about overdamped Langevin algorithms and other non-reversible diffusions, to see results where convergence times moved from n to √n, and to grasp some of the appeal of lifting albeit in finite state spaces. Plus, the cartoon presentation of Hamiltonian Monte Carlo by Michael Betancourt was a great moment, not only because of the satellite bursting into flames on the screen but also because it gave a very welcome intuition about why reversibility was inefficient and HMC appealing. So I am grateful to my two colleagues, Joris Bierkens and Gareth Roberts, for organising this exciting workshop, with a most profitable scheduling favouring long and few talks. My next visit to Warwick will also coincide with a workshop on intractable likelihood next November, this time as part of the new Alan Turing Institute programme.

## MCMskv, Lenzerheide, 4-7 Jan., 2016 on a shoestring [news #3]

Posted in Kids, Mountains, pictures, Travel, University life with tags accommodation, airbnb, Bayesian computation, Chur, ISBA, Lenzerheide, lodging, MCMSki, MCMskv, Monte Carlo Statistical Methods, Sankt Moritz, ski town, Switzerland, Zurich on September 16, 2015 by xi'an

**A**s the ‘Og received several comments about the accommodation costs for BayesComp MCMski V, which are indeed rather high if one only follows the suggestions on the lodging webpage, I started checking for cheaper alternatives in Lenzerheide and around. On booking.com, I found several local hotels and studios from 100€ to 200€ per night for two or three guests, with breakfast included. The offer on airbnb was quite limited but I still managed to secure a small chalet at about 50€ per person and per night. There are more opportunities in nearby villages, for instance Tiefencastel, 11km away with a 19 min bus connection. Chur is 18km away with a slower 39 min bus connection, but with a very wide range of offers. Savognin, near the pricey Sankt Moritz, is 20km away, with other cheap alternatives. Which may even make renting a car worth the expense if split between 3 or 4. Note also that low-cost airlines fly to Zürich from major European cities. For instance, Easyjet is currently offering a round trip from London for 72€…

## scaling the Gibbs posterior credible regions

Posted in Books, Statistics, University life with tags Bayesian inference, convergence of Gibbs samplers, empirical likelihood, Gibbs posterior, likelihood-free methods, matching priors, Monte Carlo Statistical Methods, parametrisation on September 11, 2015 by xi'an

“The challenge in implementation of the Gibbs posterior is that it depends on an unspecified scale (or inverse temperature) parameter.”

**A** new paper by Nick Syring and Ryan Martin was arXived today on the same topic as the one I discussed last January. The setting is the same as with empirical likelihood, namely that the distribution of the data is not specified, while parameters of interest are defined via moments or, more generally, via minimising a loss function. A pseudo-likelihood can then be constructed as a substitute to the likelihood, in the spirit of Bissiri et al. (2013). It is called a “Gibbs posterior” distribution in this paper, so the “Gibbs” in the title has no link with the “Gibbs” in Gibbs sampler, since inference is conducted with respect to this pseudo-posterior. Somewhat logically (!), as n grows to infinity, the pseudo-posterior concentrates upon the pseudo-true value of θ minimising the expected loss, hence asymptotically resembles the M-estimator associated with this criterion. As I pointed out in the discussion of Bissiri et al. (2013), one major hurdle when turning a loss into a log-likelihood is that it is at best defined up to a scale factor ω. The authors choose ω so that the Gibbs posterior

exp{-ωn*l*_{n}(θ)}π(θ)

is well-calibrated, where *l*_{n} is the empirical averaged loss. So the Gibbs posterior is part of the matching prior collection. In practice the authors calibrate ω by a stochastic optimisation iterative process, with bootstrap on the side to evaluate coverage. They briefly consider empirical likelihood as an alternative, on a median regression example, where they show that their “Gibbs confidence intervals (…) are clearly the best” (p.12). Apart from the relevance of being “well-calibrated”, the asymptotic nature of the results, and the dependence on the parameterisation via the loss function, one may also question the possibility of using this approach in large dimensional cases where all, or none, of the parameters are of interest.
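For illustration only, here is a grid-based sketch of a Gibbs posterior for a median under the absolute-value loss, with ω fixed rather than calibrated by the authors’ stochastic optimisation; the data, the grid, the flat prior, and the value of ω are all made up:

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic data whose median is the parameter of interest
data = rng.normal(1.0, 2.0, 200)
n = len(data)

def empirical_loss(theta):  # l_n(θ) = (1/n) Σ |x_i − θ|, vectorised over θ
    return np.mean(np.abs(data[:, None] - theta), axis=0)

omega = 1.0  # the unspecified scale / inverse temperature, fixed here
grid = np.linspace(-3.0, 5.0, 801)

# Gibbs posterior ∝ exp{-ω n l_n(θ)} π(θ), with a flat prior over the grid
logpost = -omega * n * empirical_loss(grid)
post = np.exp(logpost - logpost.max())
post /= post.sum() * (grid[1] - grid[0])  # normalise as a density on the grid

# the Gibbs posterior peaks at the M-estimator, here the sample median
map_theta = grid[np.argmax(post)]
```

Rescaling ω stretches or shrinks this pseudo-posterior without moving its mode, which is precisely why credible regions built from it need the calibration step discussed in the paper.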

## MCMskv, Lenzerheide, 4-7 Jan., 2016 [news #2]

Posted in Mountains, pictures, Statistics, Travel, University life with tags BayesComp, Bayesian computation, ISBA, Lenzerheide, MCMSki, MCMskv, Monte Carlo Statistical Methods, Richard Tweedie, Schweizerhof, ski town, STAN, Switzerland, Zurich on September 7, 2015 by xi'an

**A** quick reminder that the early bird registration deadline for BayesComp MCMski V is drawing near. And reminding Og's readers that there will be a “Breaking news” session to highlight major advances among poster submissions. For which they can apply when sending the poster template. In addition, there is only a limited number of hotel rooms at the Schweizerhof, the main conference hotel, and the first 40 participants who make a reservation there will get a free one-day skipass!