Archive for revision

Nested Sampling SMC [a reply]

Posted in Books, Statistics, University life on April 9, 2020 by xi'an
Here is a response from Robert Salomone following my comments of the previous day (and pointing out that I had already commented on the paper two years ago):
You may be interested to know that we are at the tail end of carrying out a major revision of the paper, which we hope will be done in the near future — there will be some new theory (we are in the final stages for a consistency proof of the ANS-SMC algorithm with new co-author Adam Johansen), as well as new numerics (including comparisons to Nested Sampling), and additional discussion that clarifies the overall narrative.
A few comments relating to your post that may clear some things up:
  • The method you describe with the auxiliary variable is actually one of three proposed algorithms. We call this one “Improved Nested Sampling”, as it is the algorithm most similar to the original Nested Sampling. Two further extensions are the adaptive SMC sampler and the fixed SMC sampler – the latter of which is provably consistent and unbiased for the model evidence (we also often see improvements over standard NS for similar computational effort when MCMC is used).
  • Regarding computational effort – it is the same for Improved NS (in fact, you can obtain the standard Nested Sampling evidence estimate from the same computational run! [a minimal sketch of that standard estimator follows this list]). For the adaptive variant, the computational effort is roughly the same for ρ = e⁻¹. In the current version of the paper this is only discussed briefly (last paragraph of p. 23). However, in the revision we will include additional experiments comparing the practical performance.
  • Regarding the question of “why not regular SMC?”: we chose to focus more on why SMC is a good way to do Nested Sampling rather than why Nested Sampling is a good way to do SMC. Our main priority was to show that there is a lot of opportunity to develop new nested-sampling-style algorithms by approaching the problem from a different angle. That said, Nested Sampling’s primary advantage over standard SMC seems to be in problems involving “phase transitions”, such as our first example, for which temperature-based methods are inherently ill-suited (and will often fail without even detecting that they have!).
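For context, the “standard Nested Sampling evidence estimate” mentioned in the second point is the usual weighted sum of likelihood levels against deterministically shrinking prior volumes. Here is a minimal sketch on a toy problem, where the likelihood-constrained prior draws are done by naïve rejection (purely an illustrative choice, not the SMC construction of the paper, and all numerical settings are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: uniform prior on [-5, 5], standard Gaussian likelihood in theta.
def log_likelihood(theta):
    return -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)

def sample_prior(size):
    return rng.uniform(-5.0, 5.0, size)

N = 200          # number of live points
n_iter = 6 * N   # stop once the remaining prior volume exp(-n_iter/N) is negligible

live = sample_prior(N)
live_logL = log_likelihood(live)

log_terms = []
X_prev = 1.0     # remaining prior volume, starts at 1

for i in range(1, n_iter + 1):
    worst = np.argmin(live_logL)
    logL_worst = live_logL[worst]

    # Deterministic shrinkage X_i = exp(-i/N); the weight is the shed volume X_{i-1} - X_i.
    X = np.exp(-i / N)
    log_terms.append(logL_worst + np.log(X_prev - X))
    X_prev = X

    # Replace the worst live point by a prior draw with strictly higher likelihood
    # (naive rejection here; the paper replaces this step by SMC moves).
    while True:
        cand = sample_prior(1)[0]
        if log_likelihood(cand) > logL_worst:
            live[worst] = cand
            live_logL[worst] = log_likelihood(cand)
            break

log_Z = np.logaddexp.reduce(log_terms)
print("nested sampling log-evidence:", log_Z)
print("exact log-evidence:          ", np.log(0.1))  # Z = (1/10) * ∫ N(theta; 0, 1) dtheta ≈ 1/10
```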

revised empirical HMC

Posted in Statistics, University life on March 12, 2019 by xi'an

Following the informed and helpful comments from Matt Graham and Bob Carpenter on our eHMC paper [arXival] last month, we produced a revised and re-arXived version of the paper, based on new experiments run by Changye Wu and Julien Stoehr. Here are some quick replies to these comments, reproduced for convenience. (Warning: this is a loooong post, much longer than usual.)

mixture modelling for testing hypotheses

Posted in Books, Statistics, University life on January 4, 2019 by xi'an

After a fairly long delay (the first version was posted and submitted in December 2014), we eventually revised and resubmitted our paper with Kaniav Kamary [who has now graduated], Kerrie Mengersen, and Judith Rousseau on the final day of 2018. The main reason for this massive delay is mine, as I got fairly depressed by the general tone of the dozen reviews we received after submitting the paper as a Read Paper to the Journal of the Royal Statistical Society, despite a rather opposite reaction from the community (an admittedly biased sample!), including two dozen citations in other papers. (There seems to be a pattern in my submissions of Read Papers, witness our earlier and unsuccessful attempt with Christophe Andrieu in the early 2000s with the paper on controlled MCMC, which has led to 121 citations so far according to Google Scholar.) Anyway, thanks to my co-authors keeping up the fight, we started working on a revision including stronger convergence results, managing to show that the approach leads to an optimal separation rate, contrary to the Bayes factor which has an extra √log(n) factor. This may sound paradoxical since, while the Bayes factor converges to 0 under the alternative model exponentially quickly, the convergence rate of the mixture weight α to 1 is of order 1/√n; but this does not mean that the separation rate of the procedure based on the mixture model is worse than that of the Bayes factor. On the contrary, while it is well known that the Bayes factor leads to a separation rate of order √log(n)/√n in parametric models, we show that our approach can lead to a testing procedure with a better separation rate, of order 1/√n. We also studied a non-parametric setting where the null is a specified family of distributions (e.g., Gaussians) and the alternative is a Dirichlet process mixture, establishing that the posterior distribution concentrates around the null at the rate √log(n)/√n. We thus resubmitted the paper for publication, although not as a Read Paper, with hopefully more luck this time!
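The approach itself replaces the testing problem with estimation of the weight α in an encompassing mixture of the two competing models. As a toy illustration (not the paper's own examples, priors, or rates, with all numerical settings invented for the sketch), here is a minimal Gibbs sampler for testing N(0,1) against N(μ,1) through the mixture (1−α)N(0,1) + αN(μ,1), with Beta and Gaussian priors on α and μ:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy data generated under the alternative N(1, 1).
n = 200
x = rng.normal(1.0, 1.0, n)

# Encompassing mixture: (1 - alpha) * N(0, 1) + alpha * N(mu, 1),
# with alpha ~ Beta(a0, a1) on the weight of the alternative and mu ~ N(0, tau2).
a0, a1 = 0.5, 0.5
tau2 = 10.0

n_iter = 5000
alpha, mu = 0.5, 0.0
alphas = np.empty(n_iter)

for t in range(n_iter):
    # 1. Allocate each observation to a component, given (alpha, mu).
    p_null = (1.0 - alpha) * norm.pdf(x, 0.0, 1.0)
    p_alt = alpha * norm.pdf(x, mu, 1.0)
    z = rng.random(n) < p_alt / (p_null + p_alt)   # True -> allocated to N(mu, 1)

    # 2. Update alpha | z through Beta conjugacy.
    n_alt = int(z.sum())
    alpha = rng.beta(a0 + n_alt, a1 + n - n_alt)

    # 3. Update mu | z, x through Gaussian conjugacy (only allocated points contribute).
    post_var = 1.0 / (1.0 / tau2 + n_alt)
    post_mean = post_var * x[z].sum()
    mu = rng.normal(post_mean, np.sqrt(post_var))

    alphas[t] = alpha

burn = n_iter // 2
print("posterior mean of the alternative weight alpha:", alphas[burn:].mean())
```

With data simulated from the alternative, the posterior of α drifts toward 1, which is the qualitative behaviour whose convergence rate is discussed above.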

barbed WIREs

Posted in Books, Kids, University life on July 14, 2018 by xi'an


Maybe childishly, I am fairly unhappy with the way the submission of our Accelerating MCMC review was handled by WIREs Computational Statistics, i.e., Wiley, at the production stage. For some reason or another, I sent the wrong BibTeX file with my LaTeX document [created using the style file imposed by WIREs]. Rather than pointing out the numerous missing entries, the production staff started working on the paper and sent us a proof with an endless list of queries related to these missing references. When I sent back the corrected LaTeX and BibTeX files, they answered that it was too late to modify the files as it would “require re-work of [the] already processed paper which is also not a standard process for the journal”. Meaning, in clearer terms, that Wiley does not want to pay for any additional time spent on this paper and that I have to draw on my own “free” time to make up for this mess…

on the Jeffreys-Lindley’s paradox (revision)

Posted in Statistics, University life on September 17, 2013 by xi'an

As mentioned here a few days ago, I have been revising my paper on the Jeffreys-Lindley paradox for Philosophy of Science. It came as a bit of a (very pleasant) surprise that this journal was ready to consider a revised version of the paper, given that I have no formal training in philosophy and that the (first version of the) paper was rather hurriedly assembled from a short text written for the 95th birthday of Dennis Lindley and from my blog post on Aris Spanos’ “Who should be afraid of the Jeffreys-Lindley paradox?”, recently published in Philosophy of Science. I found both reviewers very supportive and I am grateful for their suggestions to improve both the scope and the presentation of the paper. It has been resubmitted and re-arXived, and I am now waiting for the decision of the editorial team with the appropriate philosophical sense of detachment…