Archive for revision

computing Bayes 2.0
Posted in Books, Statistics, University life with tags Approximate Bayesian computation, arXiv, ASSA, Australia, Bayesian computing, MCMC, Monash University, Monte Carlo methods, Monte Carlo Statistical Methods, review, revision, survey on December 11, 2020 by xi'an

Our survey paper on “computing Bayes”, written with my friends Gael Martin [who led this project most efficiently!] and David Frazier, has now been revised and resubmitted, with the new version now available on arXiv. Recognising that the entire range of the literature cannot be encompassed within a single review, especially with respect to the theoretical advances made on MCMC, the revised version is more focussed on approximate solutions (when considering MCMC as “exact”!). As put by one of the referees [who were all very supportive of the paper], “the authors are very brave. To cover in a review paper the computational methods for Bayesian inference is indeed a monumental task and in a way a hopeless one”. This is also the opportunity to congratulate Gael on her election to the Academy of the Social Sciences in Australia last month. (Along with her colleague from Monash, Rob Hyndman.)

Nested Sampling SMC [a reply]
Posted in Books, Statistics, University life with tags ANS-SMC, Australia, nested sampling, phase transition, Queensland University of Technology, reply, revision, short term memory, SMC, temperature schedule on April 9, 2020 by xi'an

You may be interested to know that we are at the tail end of carrying out a major revision of the paper, which we hope will be done in the near future. There will be some new theory (we are in the final stages of a consistency proof of the ANS-SMC algorithm with new co-author Adam Johansen), as well as new numerics (including comparisons to Nested Sampling), and additional discussion that clarifies the overall narrative.

A few comments relating to your post that may clear some things up:
- The method you describe with the auxiliary variable is actually one of three proposed algorithms. We call this one “Improved Nested Sampling” as it is the algorithm most similar to the original Nested Sampling. Two further extensions are the adaptive SMC sampler and the fixed SMC sampler, the latter of which is provably consistent and unbiased for the model evidence (we also often see improvements over standard NS for similar computational effort when MCMC is used).
- Regarding computational effort: it is the same for Improved NS (in fact, you can obtain the standard Nested Sampling evidence estimate from the same computational run!). For the adaptive variant, the computational effort is roughly the same for ρ = e⁻¹. In the current version of the paper this is only discussed briefly (last paragraph of p. 23). However, in the revision we will include additional experiments comparing the practical performance.
- Regarding the question of “why not regular SMC”: we chose to focus more on why SMC is a good way to do Nested Sampling rather than on why Nested Sampling is a good way to do SMC. Our main priority was to show there is a lot of opportunity to develop new nested-sampling-style algorithms by approaching the problem from a different angle. That said, Nested Sampling’s primary advantage over standard SMC seems to be in problems involving “phase transitions”, such as our first example, for which temperature-based methods are inherently ill-suited (and will often fail without detecting it!). For reference, a toy sketch of the standard nested sampling recursion that these samplers build on follows this list.
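[For readers less familiar with the baseline under discussion, here is a minimal sketch of the standard nested sampling evidence recursion on a toy one-dimensional problem, with hypothetical settings chosen purely for illustration (flat prior on [−5, 5], Gaussian likelihood, 200 live points). It is not the ANS-SMC or Improved NS algorithm of the paper; in particular, the likelihood-constrained prior draws are obtained by naive rejection, which is only viable in toy dimensions.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: flat prior on [-5, 5], Gaussian likelihood N(theta; 0, sd=0.5),
# so the evidence is Z ~ 1/10 (the Gaussian mass sits well inside the prior),
# i.e. log Z ~ -2.303.
def log_like(theta):
    return -0.5 * (theta / 0.5) ** 2 - np.log(0.5 * np.sqrt(2.0 * np.pi))

def sample_prior(size=None):
    return rng.uniform(-5.0, 5.0, size)

N = 200                        # number of live points
live = sample_prior(N)
live_ll = log_like(live)
log_Z = -np.inf                # running log-evidence
log_X_prev = 0.0               # log prior volume, starts at X_0 = 1

for i in range(1, 1500):
    worst = np.argmin(live_ll) # dead point: smallest likelihood in the pool
    L_min = live_ll[worst]
    log_X = -i / N             # deterministic shrinkage E[log X_i] = -i/N
    # shell width w_i = X_{i-1} - X_i, computed in log scale
    log_w = log_X_prev + np.log1p(-np.exp(log_X - log_X_prev))
    log_Z = np.logaddexp(log_Z, L_min + log_w)
    log_X_prev = log_X
    # refresh the dead point with a prior draw constrained to L > L_min
    cand = sample_prior()
    while log_like(cand) <= L_min:   # naive rejection sampling
        cand = sample_prior()
    live[worst], live_ll[worst] = cand, log_like(cand)

# contribution of the remaining live points over the final volume X
m = live_ll.max()
log_Z = np.logaddexp(log_Z, m + np.log(np.mean(np.exp(live_ll - m))) + log_X_prev)
print(f"nested sampling log Z = {log_Z:.3f} (exact ~ -2.303)")
```

[As I understand the SMC variants discussed above, the one-point-at-a-time shrinkage step is replaced by population moves past adaptively chosen likelihood thresholds, with ρ the fraction of particles retained per step.]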
revised empirical HMC
Posted in Statistics, University life with tags eHMC, github, Hamiltonian Monte Carlo, leapfrog integrator, NUTS, Rao-Blackwellisation, revision, scaling, STAN on March 12, 2019 by xi'an

Following the informed and helpful comments from Matt Graham and Bob Carpenter on our eHMC paper [arXival] last month, we produced a revised and re-arXived version of the paper based on new experiments run by Changye Wu and Julien Stoehr. Here are some quick replies to these comments, reproduced for convenience. (Warning: this is a loooong post, much longer than usual.)
mixture modelling for testing hypotheses
Posted in Books, Statistics, University life with tags Bayes factor, Bayesian hypothesis testing, Christophe Andrieu, controlled MCMC, JRSSB, peer review, Read paper, revision, testing as mixture estimation, Ultimixt, University of Bristol on January 4, 2019 by xi'an

After a fairly long delay (since the first version was posted and submitted in December 2014), we eventually revised and resubmitted our paper with Kaniav Kamary [who has now graduated], Kerrie Mengersen, and Judith Rousseau on the final day of 2018. The main reason for this massive delay is mine, as I got fairly depressed by the general tone of the dozen reviews we received after submitting the paper as a Read Paper in the Journal of the Royal Statistical Society, despite a rather opposite reaction from the community (an admittedly biased sample!), including two dozen citations in other papers. (There seems to be a pattern in my submissions of Read Papers, witness our earlier and unsuccessful attempt with Christophe Andrieu in the early 2000’s with the paper on controlled MCMC, leading to 121 citations so far according to Google Scholar.) Anyway, thanks to my co-authors keeping up the fight!, we started working on a revision including stronger convergence results, managing to show that the approach leads to an optimal separation rate, contrary to the Bayes factor which carries an extra √log(n) factor. This may sound paradoxical since the Bayes factor converges to 0 exponentially quickly under the alternative model, while the convergence rate of the mixture weight α to 1 is of order 1/√n, but this does not mean that the separation rate of the procedure based on the mixture model is worse than that of the Bayes factor: while it is well known that the Bayes factor leads to a separation rate of order √log(n)/√n in parametric models, we show that our approach can lead to a testing procedure with a better separation rate of order 1/√n. We also studied a non-parametric setting where the null is a specified family of distributions (e.g., Gaussians) and the alternative is a Dirichlet process mixture, establishing that the posterior distribution concentrates around the null at the rate √log(n)/√n. We thus resubmitted the paper for publication, although not as a Read Paper, with hopefully more luck this time!
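[To make the testing-as-mixture-estimation idea concrete, here is a minimal Gibbs-sampler sketch on a toy Gaussian example of my own devising, not one of the paper's examples: the null N(0,1) and the alternative N(θ,1) are embedded in an encompassing mixture with weight α, with a Beta(a₀, a₀) prior on α (a₀ = 0.5 is just an illustrative choice), and the posterior distribution of α replaces the Bayes factor as the summary of the test.]

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy test: H0: x ~ N(0,1)  versus  H1: x ~ N(theta,1), theta ~ N(0,1).
# Encompassing mixture: x_i ~ alpha N(theta,1) + (1-alpha) N(0,1),
# alpha ~ Beta(a0, a0); the posterior of alpha is the output of the test.
x = rng.normal(0.0, 1.0, size=100)   # data simulated under the null
n, a0 = len(x), 0.5

def log_norm(x, mu):                 # N(mu,1) log-density up to a constant
    return -0.5 * (x - mu) ** 2

alpha, theta = 0.5, 0.0
draws = []
for it in range(10_000):
    # 1. allocate observations: P(z_i = 1) prop. to alpha N(x_i; theta, 1)
    p1 = 1.0 / (1.0 + (1.0 - alpha) / alpha
                * np.exp(log_norm(x, 0.0) - log_norm(x, theta)))
    z = rng.random(n) < p1           # z_i = 1 -> alternative component
    n1 = z.sum()
    # 2. conjugate Beta update of the mixture weight
    alpha = rng.beta(a0 + n1, a0 + n - n1)
    # 3. conjugate Normal update of theta from the n1 allocated points
    post_var = 1.0 / (1.0 + n1)
    theta = rng.normal(post_var * x[z].sum(), np.sqrt(post_var))
    draws.append(alpha)

print("posterior mean of alpha:", np.mean(draws[1000:]))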
barbed WIREs
Posted in Books, Kids, University life with tags commercial editor, computational statistics, John Wiley, managing editor, revision, WIREs, WIREs Computational Statistics on July 14, 2018 by xi'an
Maybe childishly, I am fairly unhappy with the way the submission of our Accelerating MCMC review was handled by WIREs Computational Statistics, i.e., Wiley, at the production stage. For some reason or another, I sent the wrong BibTeX file with my LaTeX document [created using the style file imposed by WIREs]. Rather than pointing out the numerous missing entries, the production staff started working on the paper and sent us a proof with an endless list of queries related to these missing references. When I sent back the corrected LaTeX and BibTeX files, the answer was that it was too late to modify the files as it would “require re-work of [the] already processed paper which is also not a standard process for the journal”. Meaning, in clearer terms, that Wiley does not want to pay for any additional time spent on this paper and that I have to make up for this mess on my own “free” time…