Archive for asynchronous algorithms
ABC, anytime!
Posted in Books, pictures, Statistics, Travel, University life with tags ABC, arXiv, asynchronous algorithms, parallel tempering on January 18, 2021 by xi'an
Last June, Alix Marie d’Avigneau, Sumeet Singh, and Lawrence Murray arXived a paper on anytime ABC that I intended to review right away but that sat till now on my virtual desk (and pile of to-cover-arXivals!). The notion of anytime MCMC was already covered in earlier ‘Og entries, but this anytime ABC version bypasses the problem of asynchronicity, namely, “randomly varying local move completion times when parallel tempering is implemented on a multi-processor computing resource”. The different temperatures are replaced by different tolerances in ABC. Switches between tolerances are natural, since a proposal simulated for a given tolerance ε may happen to be eligible for a lower tolerance ε’. The approach also accounts for the different durations required to simulate a proposal under different tolerances, and for the wait for other processors to complete their tasks, in order to avoid the induced bias in the stationary distributions. A drawback of the approach is the need to calibrate the tolerance levels in advance (or via preliminary runs that may prove costly).
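For readers curious about what a tolerance switch looks like in practice, here is a crude toy sketch of ABC-MCMC run over a decreasing ladder of tolerances, with a one-way move down the ladder whenever the current state already satisfies the next lower tolerance. This is only meant to illustrate the switching idea: the Gaussian toy model, the ladder, and the descent rule are mine, and the sketch ignores the correction for varying computation times that is the actual point of the anytime construction.

```python
# Toy sketch of ABC-MCMC over a decreasing ladder of tolerances (NOT the
# authors' anytime algorithm): a state whose simulated distance also satisfies
# the next lower tolerance triggers a move down the ladder, the ABC analogue of
# a temperature swap in parallel tempering.
import numpy as np

rng = np.random.default_rng(0)
y_obs = rng.normal(1.0, 1.0, size=50)      # toy data with unknown mean
s_obs = y_obs.mean()                        # summary statistic

def distance(theta):
    """Distance between observed and simulated summary statistics."""
    z = rng.normal(theta, 1.0, size=50)
    return abs(z.mean() - s_obs)

def log_prior(theta):
    return -0.5 * (theta / 5.0) ** 2        # N(0, 5^2) prior

eps = [1.0, 0.5, 0.25, 0.1]                 # decreasing tolerance ladder
k, theta, d = 0, 0.0, distance(0.0)

for _ in range(10_000):
    prop = theta + 0.5 * rng.normal()       # symmetric random-walk proposal
    d_prop = distance(prop)
    # ABC-MCMC acceptance: prior ratio times the indicator of the tolerance ball
    if d_prop <= eps[k] and np.log(rng.random()) < log_prior(prop) - log_prior(theta):
        theta, d = prop, d_prop
    # tolerance switch: if the current state is already eligible for the next
    # lower tolerance, move down the ladder
    if k + 1 < len(eps) and d <= eps[k + 1]:
        k += 1

print(f"final tolerance {eps[k]:.2f}, last value of theta {theta:.3f}")
```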
Hastings 50 years later
Posted in Books, pictures, Statistics, University life with tags 1066, asynchronous algorithms, automation, Battle of Hastings, Bayesian statistics, BUGS, history of statistics, incompatible conditionals, Metropolis-Hastings algorithms, Normans, pseudo-marginal MCMC, STAN, Wilfred Keith Hastings on January 9, 2020 by xi'an
What is the exact impact of the Metropolis-Hastings algorithm on the field of Bayesian statistics? And what are the new tools of the trade? What I personally find the most relevant and attractive element in a review on the topic is the current role of this algorithm, rather than its past (his)story, since many such reviews have already appeared and will likely continue to appear. What matters most imho is how much the Metropolis-Hastings algorithm signifies for the community at large, especially beyond academia. Is the availability or unavailability of software like BUGS or Stan a help or a hindrance? Was Hastings’ paper the start of the era of approximate inference or the end of exact inference? Are the algorithm’s intrinsic features, like Markovianity, a fundamental cause of an eventual extinction, because of the ensuing time constraints, the lack of practical guarantees of convergence, and the illusion of a fully automated version? Or are emerging solutions like unbiased MCMC and asynchronous algorithms a beacon of hope?
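For the record, and since the Hastings ratio resurfaces further below, here is a minimal textbook version of the Metropolis-Hastings kernel (nothing specific to the review under discussion), written for a symmetric random-walk proposal so that the Hastings ratio π(y)q(x|y)/π(x)q(y|x) reduces to π(y)/π(x).

```python
# Minimal textbook Metropolis-Hastings kernel (illustration only): with a
# symmetric random-walk proposal, the Hastings ratio reduces to pi(y)/pi(x).
import numpy as np

def metropolis_hastings(log_target, x0, n_iter=10_000, scale=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = x0
    chain = np.empty(n_iter)
    for t in range(n_iter):
        y = x + scale * rng.normal()                 # symmetric proposal q(y|x)
        log_alpha = log_target(y) - log_target(x)    # log Hastings ratio
        if np.log(rng.random()) < log_alpha:         # accept with prob min(1, alpha)
            x = y
        chain[t] = x
    return chain

# toy target: standard normal
chain = metropolis_hastings(lambda x: -0.5 * x**2, x0=5.0)
print(chain.mean(), chain.std())
```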
In their Biometrika paper, Dunson and Johndrow (2019) recently wrote a celebration of Hastings’ 1970 paper, where they cover adaptive Metropolis (Haario et al., 1999; Roberts and Rosenthal, 2005) and the importance of gradient-based versions toward universal algorithms (Roberts and Tweedie, 1995; Neal, 2003), discussing the advantages of HMC over Langevin versions. They also recall the significant step represented by Peter Green’s (1995) reversible jump algorithm for multimodal and multidimensional targets, as well as tempering (Miasojedow et al., 2013; Woodard et al., 2009). They further cover intractable likelihood cases within MCMC (rather than ABC), with the use of auxiliary variables (Friel and Pettitt, 2008; Møller et al., 2006) and pseudo-marginal MCMC (Andrieu and Roberts, 2009; Andrieu and Vihola, 2016). They naturally insist upon the need to handle huge datasets, high-dimension parameter spaces, and other scalability issues, with links to unadjusted Langevin schemes (Bardenet et al., 2014; Durmus and Moulines, 2017; Welling and Teh, 2011). They also discuss recent developments towards parallel MCMC and non-reversible schemes such as PDMPs as highly promising, with a concluding section on the challenges of further automating and robustifying these procedures, if only to reach a wider range of applications. The paper is well-written and contains a wealth of directions and reflections, including those in my above introduction. Here are some mostly disconnected directions I would have liked to see covered, or covered in more depth:
- convergence assessment today, e.g. the comparison of various approximation schemes
- Rao-Blackwellisation and other post-processing improvements
- approximate inference tools other than pseudo-marginal MCMC
- importance of the parameterisation of the problem for convergence
- dimension issues and connection with quasi-Monte Carlo
- constrained spaces of measure zero, as for instance matrix distributions imposing zeros outside a diagonal band
- given the rise of the machine(-learners), are exploratory and intrinsically slow algorithms like MCMC doomed or can both fields feed one another? The section on optimisation could be expanded in that direction
- the wasteful nature of the random walk feature of MCMC algorithms, as opposed to non-reversible kernels like HMC and PDMPs, missing from the gradient-based methods section (and can we once again learn from physicists?)
- finer convergence issues and hence inference difficulties with complex MCMC algorithms like Gibbs samplers with incompatible conditionals
- use of the Hastings ratio in other algorithms like ABC or EP (in connection with the section on generalised Bayes)
- adapting Metropolis-Hastings methods for emerging computing tools like GPUs and quantum computers
Or possibly less covered, namely data augmentation, which is put forward although it is but a special case of auxiliary variables, as in slice sampling and in the earlier physics literature. For instance, both probit and logistic regressions do not truly require data augmentation and are more toy examples than really challenging applications. The approach of Carlin & Chib (1995) is another illustration, which has met with recent interest despite requiring heavy calibration (just like RJMCMC). As well as a somewhat awkward opposition between Gibbs and Hastings, in that I am not convinced that Gibbs sampling does not remain ultimately necessary to handle high-dimension problems, in the sense that the alternative solutions like Langevin, HMC, PDMPs, or…, rely on Euclidean assumptions for the entire parameter vector, while a direct product of Euclidean structures may prove more adequate.
automatic adaptation of MCMC algorithms
Posted in pictures, Statistics with tags adaptive MCMC methods, arXiv, asynchronous algorithms, calibration, convergence of Gibbs samplers, Gibbs sampling, MCMC, parallelisation on March 4, 2019 by xi'an
“A typical adaptive MCMC sampler will approximately optimize performance given the kind of sampler chosen in the first place, but it will not optimize among the variety of samplers that could have been chosen.”
Last February (2018), Dao Nguyen and five co-authors arXived a paper that I missed, on a new version of adaptive MCMC that aims at selecting among a wider range of proposal kernels, while still requiring a by-hand selection of this collection of kernels… Among the points addressed, beyond the theoretical guarantees that the adaptive scheme does not jeopardize convergence to the proper target, are a meta-exploration of the set of combinations of samplers and the integration of computational speed in the assessment of each sampler, including the very difficulty of assessing mixing. One could deem the index of the proposal to be an extra (cyber-)parameter, on top of its generic parameters (like the scale in the random walk), but the discreteness of this index makes the extension more delicate than expected, and justifies the distinction between internal and external parameters. The notion of a worst-mixing dimension is quite appealing and connects with the long-term hope that one could spend the largest fraction of the sampler’s runtime on the directions that mix poorly, while still preserving the correct target. The adaptive scheme is illustrated on several realistic models with rather convincing gains in efficiency and time.
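As a toy illustration of the idea of adapting over a finite collection of kernels (and definitely not the algorithm of Nguyen et al.), here is a sketch where the sampler picks among a few random-walk scales with probabilities proportional to their running mean squared jump distance, freezing the adaptation after a burn-in stage; the target, the scales, and the adaptation rule are all made up for the example.

```python
# Generic toy sketch (NOT the algorithm of Nguyen et al.): select among a finite
# collection of random-walk scales, with selection probabilities updated from
# the observed mean squared jump distance of each kernel. Adaptation is frozen
# after a burn-in period, so the later chain uses a fixed mixture kernel.
import numpy as np

def adaptive_scale_selection(log_target, x0, scales=(0.1, 0.5, 2.0),
                             n_iter=20_000, n_adapt=5_000, seed=0):
    rng = np.random.default_rng(seed)
    scales = np.asarray(scales)
    sq_jump = np.zeros(len(scales))           # running sums of squared jumps
    counts = np.ones(len(scales))             # usage counts (avoids division by zero)
    probs = np.full(len(scales), 1.0 / len(scales))
    x, chain = x0, np.empty(n_iter)
    for t in range(n_iter):
        k = rng.choice(len(scales), p=probs)  # pick a kernel index
        y = x + scales[k] * rng.normal()
        if np.log(rng.random()) < log_target(y) - log_target(x):
            sq_jump[k] += (y - x) ** 2
            x = y
        counts[k] += 1
        if t < n_adapt:                       # adapt only during burn-in
            perf = sq_jump / counts + 1e-12
            probs = perf / perf.sum()
        chain[t] = x
    return chain, probs

chain, probs = adaptive_scale_selection(lambda x: -0.5 * x**2, x0=3.0)
print("kernel selection probabilities:", np.round(probs, 3))
```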
The convergence tools are inspired from Roberts and Rosenthal (2007), with an assumption of uniform ergodicity over all kernels considered therein, which is both strong and delicate to assess in practical settings. Efficiency is rather unfortunately defined in terms of effective sample size, which is a measure of correlation, or lack thereof, but does not relate to the speed of escape from the basin of attraction of the starting point. I also wonder at the pertinence of estimating the effective sample size when the chain is based on different successive kernels, since the lack of correlation may be due to another kernel. Another calibration issue is the internal clock that relates to the average number of iterations required to properly tune a specific kernel, which again may be difficult to assess in a realistic situation. A last query is whether or not this scheme could be compared with an asynchronous (and valid) MCMC approach that exploits the parallel capacities of the computer.
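For completeness, here is the common (if crude) single-chain estimate of the effective sample size, truncating the autocorrelation sum at the first non-positive term; a standard estimator, not anything specific to the paper.

```python
# Crude effective sample size estimate from a single chain:
# ESS = N / (1 + 2 * sum of autocorrelations), truncating the sum at the first
# non-positive autocorrelation (a common, conservative rule of thumb).
import numpy as np

def effective_sample_size(chain):
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x.var() * n)
    tau = 1.0
    for k in range(1, n):
        if acf[k] <= 0:           # truncate at first non-positive autocorrelation
            break
        tau += 2.0 * acf[k]
    return n / tau

rng = np.random.default_rng(0)
ar1 = np.empty(5_000); ar1[0] = 0.0
for t in range(1, len(ar1)):      # AR(1) chain with strong positive correlation
    ar1[t] = 0.9 * ar1[t - 1] + rng.normal()
print(effective_sample_size(ar1))  # well below the nominal 5,000
```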
convergences of MCMC and unbiasedness
Posted in pictures, Statistics, University life with tags asynchronous algorithms, Hastings-Metropolis sampler, impatient user, maximal coupling, MCMC convergence, optimal transport, parallelisation, Paris Dauphine, perfect sampling, unbiased MCMC on January 16, 2018 by xi'an
During his talk on unbiased MCMC in Dauphine today, Pierre Jacob provided a nice illustration of the convergence modes of MCMC algorithms, with the stationary target achieved after 100 Metropolis iterations, while the mean of the target takes many more iterations to be approximated by the empirical average. Plus a nice connection between coupling time and convergence to the target.
During Pierre’s talk, some simple questions came to mind, from developing an “impatient user version”, as in perfect sampling, in order to stop chains that run “forever”, to optimising parallelisation in order to avoid problems of asynchronicity. While the complexity of coupling increases with the dimension and the coupling probability goes down, the average coupling time varies, but an unexpected figure is that the expected cost per iteration is of two simulations, irrespective of the chosen kernels. Pierre also made a connection with optimal transport couplings and stressed that the maximal coupling is applied to the proposal and not to the target.
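To make the last point concrete, here is the standard maximal-coupling recipe for two proposal densities p and q (Gaussian for the sake of the example); a generic sketch rather than Pierre’s implementation, which nonetheless exhibits both features mentioned above: the meeting probability equals the overlap of the two densities, and the expected number of simulations per call is two, whatever the pair.

```python
# Standard maximal-coupling recipe for two Gaussian proposals (a generic sketch,
# not Pierre Jacob's implementation): P(X = Y) equals the overlap of the two
# densities, and the expected number of simulations per call is 2.
import numpy as np

def npdf(x, mu, sigma=1.0):
    """Gaussian density, used to evaluate the two proposal densities p and q."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def maximal_coupling(mu_p, mu_q, sigma, rng):
    x = rng.normal(mu_p, sigma)
    if rng.uniform(0, npdf(x, mu_p, sigma)) <= npdf(x, mu_q, sigma):
        return x, x                        # the two proposals coincide (coupling)
    while True:                            # else sample y from the residual part of q
        y = rng.normal(mu_q, sigma)
        if rng.uniform(0, npdf(y, mu_q, sigma)) > npdf(y, mu_p, sigma):
            return x, y

rng = np.random.default_rng(1)
pairs = [maximal_coupling(0.0, 1.0, 1.0, rng) for _ in range(10_000)]
print(np.mean([x == y for x, y in pairs]))  # close to the overlap 2*Phi(-1/2) ≈ 0.617
```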