Archive for scalable MCMC

end-to-end Bayesian learning [CIRM]

Posted in Books, Kids, Mountains, pictures, Running, Statistics, University life with tags Bayesian learning, calanques, call for contributions, Casa Matemática Oaxaca, CIRM, CNRS, Fall, high dimensions, Luminy, Marseille, Méditerranée, mini-courses, Parc National des Calanques, robust Bayesian methods, scalable MCMC, Société Mathématique de France, Sugiton, workshop on February 1, 2021 by xi'an

Next Fall, there will be a workshop at CIRM, Luminy, Marseille, on Bayesian learning. It takes place 22-29 October 2021 on this wonderful campus at the border of the beautiful Parc National des Calanques, in a wonderfully renovated CIRM building, and involves friends and colleagues of mine as organisers and plenary speakers. (I am not involved, but plan to organise a scalable MCMC workshop there the year after!) The conference is well-supported and the housing fees will be minimal since the centre is also subsidised by CNRS. The deadline for contributed talks and posters is 22 March, while registration closes on 15 June. Hopefully by then the horizon will have cleared up enough to consider travelling and meeting again. Hopefully. (In which case I will miss this wonderful conference due to other meeting and teaching commitments in the Fall.)

scalable Metropolis-Hastings, nested Monte Carlo, and normalising flows
Posted in Books, pictures, Statistics, University life with tags Bayesian neural networks, Bernstein-von Mises theorem, CIF, computing cost, conferences, density approximation, dissertation, doubly intractable posterior, evidence, ICML 2019, ICML 2020, image analysis, International Conference on Machine Learning, L¹ convergence, logistic regression, nesting Monte Carlo, normalising flow, PhD, probabilistic programming, quarantine, SAME algorithm, scalable MCMC, thesis defence, University of Oxford, variational autoencoders, viva on June 16, 2020 by xi'an

Over a sunny if quarantined Sunday, I started reading the PhD dissertation of Rob Cornish, Oxford University, as I am the external member of his viva committee. This ended up in a highly pleasant afternoon discussing the thesis over a (remote) viva yesterday. (If bemoaning a lost opportunity to visit Oxford!) The introduction to the viva was most helpful and set the results within the different time and geographical zones of the PhD, since Rob had to switch from one group of advisors in Engineering to another group in Statistics. Plus an encompassing prospective discussion, expressing pessimism at exact MCMC for complex models and looking forward to further advances in probabilistic programming.
Made of three papers, the thesis includes this ICML 2019 [remember the era when there were conferences?!] paper on scalable Metropolis-Hastings, by Rob Cornish, Paul Vanetti, Alexandre Bouchard-Côté, Georges Deligiannidis, and Arnaud Doucet, which I commented on last year. It achieves a remarkable and paradoxical O(1/√n) cost per iteration, provided (global) lower bounds can be found on the (local) Metropolis-Hastings acceptance probabilities, since these allow for Poisson thinning à la Devroye (1986), with second-order Taylor expansions constructed for all components of the target and the third-order derivatives providing the bounds. However, the variability of the acceptance probability gets higher, which induces longer runs, still manageable if the concentration of the posterior is in tune with the Bernstein-von Mises asymptotics. I had not paid enough attention in my first read to the strong theoretical justification for the method, relying on the convergence of MAP estimates in well- and (some) mis-specified settings. Now, I would have liked to see the paper dealing with a more complex problem than logistic regression.
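As a toy illustration of the kind of bound involved (my own sketch, not the authors' construction), here is the second-order Taylor expansion of a single logistic log-likelihood term, with a Lagrange remainder bound obtained from a global bound on its third derivative:

```python
import numpy as np

# per-observation logistic log-likelihood, as a function of t = x_i' theta
def loglik_term(t, y):
    return y * t - np.logaddexp(0.0, t)   # y*t - log(1 + exp(t)), computed stably

# global bound on |d^3/dt^3 log(1 + e^t)| = |sigma(t)(1 - sigma(t))(1 - 2 sigma(t))|
C3 = 1.0 / (6.0 * np.sqrt(3.0))

# second-order Taylor expansion of the log-likelihood term around t_hat
def taylor2(t, t_hat, y):
    s = 1.0 / (1.0 + np.exp(-t_hat))      # sigmoid at the expansion point
    d1, d2 = y - s, -s * (1.0 - s)        # first and second derivatives at t_hat
    dt = t - t_hat
    return loglik_term(t_hat, y) + d1 * dt + 0.5 * d2 * dt ** 2

# Lagrange remainder bound: |loglik_term(t, y) - taylor2(t, t_hat, y)| <= C3 |t - t_hat|^3 / 6
def remainder_bound(t, t_hat):
    return C3 * np.abs(t - t_hat) ** 3 / 6.0
```

Such explicit remainder bounds are what make per-factor lower bounds, and hence the thinning, available in the logistic regression example.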
The second paper in the thesis is an ICML 2018 proceedings paper by Tom Rainforth, Robert Cornish, Hongseok Yang, Andrew Warrington, and Frank Wood, which considers Monte Carlo problems involving several nested expectations in a non-linear manner, meaning that (a) several levels of Monte Carlo approximations are required, with associated asymptotics, and (b) the resulting overall estimator is biased. This includes common doubly intractable posteriors, obviously, as well as (Bayesian) design and control problems. [And it has nothing to do with nested sampling.] The resolution chosen by the authors is strictly plug-in, in that they replace each level in the nesting with a Monte Carlo substitute and do not attempt to reduce the bias. Which means a wide range of solutions (other than the plug-in one) could have been investigated, including maybe the bootstrap. For instance, Bayesian design is presented as an application of the approach, but since it relies on the log-evidence, there exist several versions for estimating (unbiasedly) this log-evidence. Similarly, the Forsythe-von Neumann technique applies to arbitrary transforms of a primary integral. The central discussion dwells on the optimal choice of the volume of simulations at each level, optimal in terms of asymptotic MSE, or rather of an asymptotic bound on the MSE. The interesting result being that the outer expectation requires the square of the number of simulations used for the inner expectations, which all need to converge to infinity. A trick for finding an estimator of a polynomial transform reminded me of the SAME algorithm in that it duplicates the simulations as many times as the highest power of the polynomial. (The ‘Og briefly reported on this paper… four years ago.)
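To make the plug-in construction concrete, here is a minimal sketch (my own toy example, not taken from the paper) of a two-level nested Monte Carlo estimator of E_X[ f( E_Y[ g(X,Y) | X ] ) ], with the outer sample size taken as the square of the inner one, in line with the asymptotic prescription above:

```python
import numpy as np

rng = np.random.default_rng(0)

# plug-in nested Monte Carlo estimator of I = E_X[ f( E_Y[ g(X, Y) | X ] ) ],
# biased for finite inner sample size M since f is non-linear
def nested_mc(N, M, f=np.square, g=lambda x, y: x + y):
    xs = rng.normal(size=N)                                            # outer draws X ~ N(0, 1)
    inner = np.array([g(x, rng.normal(size=M)).mean() for x in xs])    # inner plug-in means
    return f(inner).mean()

# in this toy case the true value is E[X^2] = 1 while the plug-in expectation is 1 + 1/M;
# the outer sample size is taken as the square of the inner one
print(nested_mc(N=100 ** 2, M=100))
```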
The third and last part of the thesis is a proposal [appearing in ICML 2020] on relaxing bijectivity constraints in normalising flows with continuously indexed flows. (Or CIFs. As Rob made a joke about this cleaning brand, let me add (?) to that joke by mentioning that looking at CIF and bijections is less dangerous in a Trump cum COVID era than CIF and injections!) With Anthony Caterini, George Deligiannidis and Arnaud Doucet as co-authors. I am much less familiar with this area and hence a wee bit puzzled by the purpose of removing what I understand to be an appealing side of normalising flows, namely that they produce a manageable representation of density functions as a combination of bijective and differentiable functions of a baseline random vector, like a standard Normal vector. The argument made in the paper is that this representation of the density imposes a constraint on the topology of its support, since said support is homeomorphic to the support of the baseline random vector. While the supporting theoretical argument is a mathematical theorem showing that the Lipschitz bound on the transform should be infinite when the supports are topologically different, these arguments may be overly theoretical when faced with the practical implications of the replacement strategy. I somewhat miss its overall strength, given that the whole point seems to be in approximating a density function based on a finite sample.
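For readers as unfamiliar with the area as I am, here is a minimal sketch (my own toy code, with an arbitrary affine map standing in for a trained flow) of the change-of-variables representation that bijectivity buys, namely log p_X(x) = log p_Z(T⁻¹(x)) + log |det J_{T⁻¹}(x)|:

```python
import numpy as np

# toy bijective "flow" x = T(z) = A z + b with z ~ N(0, I_2);
# any invertible A works, this one is an arbitrary stand-in for a trained flow
A = np.array([[1.5, 0.3], [0.0, 0.8]])
b = np.array([0.2, -1.0])

def log_density(x):
    z = np.linalg.solve(A, x - b)                 # T^{-1}(x)
    log_pz = -0.5 * z @ z - np.log(2.0 * np.pi)   # standard bivariate Normal log-density
    _, logdet_A = np.linalg.slogdet(A)            # log |det A|
    return log_pz - logdet_A                      # change-of-variables correction

print(log_density(np.array([0.5, -0.7])))
```

The topological constraint discussed above is visible even in this toy case: the support of x is the image of the whole plane under the bijection T, hence homeomorphic to the support of z.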
postdoc at Warwick on robust SMC [call]
Posted in Kids, pictures, R, Statistics, University life with tags Alan Turing Institute, EPSRC, London, research position, scalable MCMC, sequential Monte Carlo, stochastic algorithms, University of Warwick, Warwick on January 11, 2020 by xi'an

Here is a call for a research fellow at the University of Warwick to work with Adam Johansen and Théo Damoulas on the EPSRC and Lloyd's Register Foundation funded project “Robust Scalable Sequential Monte Carlo with application to Urban Air Quality”. To quote:
The position will be based primarily at the Department of Statistics of the University of Warwick. The post holder will work closely in collaboration with the rest of the project team and another postdoctoral researcher to be recruited shortly to work within the Data Centric Engineering programme at the Alan Turing Institute in London. The post holder will be expected to visit the Alan Turing Institute regularly.
Candidates with strong backgrounds in the mathematical analysis of stochastic algorithms or sequential Monte Carlo methods are particularly encouraged to apply. Closing date is 19 Jan 2020.
distributed posteriors
Posted in Books, Statistics, Travel, University life with tags CDT, high dimensions, minimaxity, OxWaSP, parallel MCMC, scalable MCMC, statistical theory, University of Warwick on February 27, 2019 by xi'an

Another presentation by our OxWaSP students introduced me to the notion of distributed posteriors, following a 2018 paper by Botond Szabó and Harry van Zanten. This corresponds to the construction of posteriors when conducting a divide & conquer strategy. The authors show that an adaptation of the prior to the division of the sample is necessary to recover the (minimax) convergence rate obtained in the non-distributed case. This is somewhat annoying, except that the adaptation amounts to taking the original prior to the power 1/m, where m is the number of divisions. They further show that, when the regularity (parameter) of the model is unknown, the optimal rate cannot be recovered unless stronger assumptions are made on the non-zero parameters of the model.
“First of all, we show that depending on the communication budget, it might be advantageous to group local machines and let different groups work on different aspects of the high-dimensional object of interest. Secondly, we show that it is possible to have adaptation in communication restricted distributed settings, i.e. to have data-driven tuning that automatically achieves the correct bias-variance trade-off.”
I find the paper of considerable interest for scalable MCMC methods, even though the setting may sound overly formal, because the study incorporates parallel computing constraints. (Although I did not investigate the more theoretical aspects of the paper.)
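A minimal numerical check of the prior-rescaling trick (my own toy conjugate example, not from the paper): in a Normal mean model, giving each of the m machines the prior raised to the power 1/m and averaging the local posterior means recovers the full-data posterior mean exactly.

```python
import numpy as np

rng = np.random.default_rng(1)

# x_i ~ N(theta, 1), prior theta ~ N(0, tau2); m machines each hold n/m observations
n, m, tau2, theta = 1_000, 10, 4.0, 0.7
x = rng.normal(theta, 1.0, size=n)

# full-data posterior mean under the original prior
full_mean = x.sum() / (n + 1.0 / tau2)

# each machine uses the prior raised to the power 1/m, i.e. N(0, m * tau2), on its shard
shards = np.split(x, m)
local_means = [s.sum() / (len(s) + 1.0 / (m * tau2)) for s in shards]

print(full_mean, np.mean(local_means))   # identical up to floating-point error
```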
scalable Metropolis-Hastings
Posted in Books, Statistics, Travel with tags delayed acceptance, Fukui-Todo procedure, Hamiltonian Monte Carlo, Langevin MCMC algorithm, PDMP, scalable MCMC, scaling, Taylor expansion, thinning, University of Oxford on February 12, 2019 by xi'an

Among the flurry of arXived papers of last week (414!), including a fair chunk of papers submitted to ICML 2019, I spotted one entry by Cornish et al. on scalable Metropolis-Hastings, which Arnaud Doucet had mentioned to me yesterday when in Oxford. The paper builds on the delayed acceptance paper we wrote with Marco Banterlé, Clara Grazian and Anthony Lee, itself relying on a factorisation decomposition of the likelihood, combined with control-variate acceleration techniques. The factorisation of both the target and the proposal allows for a (less efficient) Metropolis-Hastings acceptance ratio that is the product
of individual Metropolis-Hastings acceptance ratios, but which allows for quicker rejection if one of the probabilities in the product is small, because the corresponding Bernoulli draw is zero with high probability. One advance made in Michel et al. (2017) [which I doubly missed] is that subsampling is achievable by thinning (as in PDMPs, where these authors have been quite active) through an algorithm of Shantikumar (1985) [described in Devroye’s bible]. Provided each Metropolis-Hastings probability can be lower bounded:
by a term where the transition φ does not depend on the index i in the product. The computing cost of the thinning process thus depends on the efficiency of the subsampling, namely whether or not the (Poisson) number of terms is much smaller than m, the number of terms in the product. A neat trick in the current paper, extending the Fukui-Todo procedure, is to switch back to the original Metropolis-Hastings when the overall lower bound is too small, recovering the geometric ergodicity of the original algorithm when it holds (Theorem 2.1). Another neat remark is that, when using the naïve factorisation as the product of the n individual likelihoods, the resulting algorithm is sort of doomed as n grows, even with an optimal scaling of the proposals. To achieve scalability, the authors introduce a Taylor (i.e., Gaussian) approximation to each local target in the product and start the acceptance decomposition by using the resulting overall Gaussian approximation. Meaning that the remaining product is now made of ratios of targets over their local Taylor approximations, hence most likely close to one, and potentially lower-bounded by the remainder term in the Taylor expansion. Leading to the conclusion that, when everything goes well, meaning that the Taylor expansions can be conducted and the bounds derived for the appropriate expansion, the order of the Poisson scale is O(1/√n)..! The proposal for the Metropolis-Hastings move is actually tuned to the Gaussian approximation, appearing as a variant of the Langevin move or, more exactly, a discretisation of a Hamiltonian move. Obviously, I cannot judge the complexity of implementing this new scheme from just reading the paper, but this development on the split target is definitely an exciting prospect for handling huge datasets and their friends!
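As a rough sketch of the factorised acceptance step described above (my own simplification, leaving out the Poisson thinning, the Taylor control variates and the switch back to plain Metropolis-Hastings), each factor contributes an independent Bernoulli draw and the proposal is rejected as soon as one draw fails:

```python
import numpy as np

rng = np.random.default_rng(2)

def factorised_accept(log_ratios):
    """Accept with probability prod_i min(1, r_i), where log_ratios[i] is the log
    of the i-th factor of the Metropolis-Hastings ratio. Rejection can occur
    early, before looking at the remaining factors."""
    for lr in log_ratios:
        if np.log(rng.uniform()) >= min(0.0, lr):   # Bernoulli(min(1, r_i)) draw fails
            return False
    return True

# usage (with loglik_terms a hypothetical per-observation log-likelihood evaluator):
# accepted = factorised_accept(loglik_terms(theta_prop) - loglik_terms(theta))
```

For the early rejection to actually save computation, the per-factor log-ratios would of course have to be evaluated lazily (or thinned via the Poisson mechanism) rather than precomputed as above.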