## contemporary issues in hypothesis testing

Posted in pictures, Statistics, Travel, University life on May 3, 2016 by xi'an

Next Fall, on 15-16 September, I will take part in a CRiSM workshop on hypothesis testing, in our department at Warwick. Registration is now open [until Sept 2] with a moderate registration fee of £40 and a call for posters. Jim Berger and Joris Mulder will both deliver a plenary talk there, while Andrew Gelman will alas give a remote talk from New York. (A terrific poster, by the way!)

## STAN trailer [PG+53]

Posted in Kids, R, Statistics, University life on August 14, 2015 by xi'an

[Heading off to mountainous areas with no Internet or phone connection, I posted a series of entries for the following week, starting with this brilliant trailer of Michael's:]

## also sprach Nietzsche

Posted in Books, Kids, pictures on March 30, 2015 by xi'an

## off to New York

Posted in Books, pictures, Statistics, Travel, University life on March 29, 2015 by xi'an

I am off to New York City for two days, giving a seminar at Columbia tomorrow and visiting Andrew Gelman there. My talk will be about testing as mixture estimation, with slides similar to the Nice ones below if slightly upgraded and augmented during the flight to JFK. Looking at the past seminar speakers, I noticed we were three speakers from Paris in the last fortnight, with Ismael Castillo and Paul Doukhan (in the Applied Probability seminar) preceding me. Is there a significant bias there?!

## Statistics done wrong [book review]

Posted in Books, Kids, pictures, Statistics, University life on March 16, 2015 by xi'an

no starch press (!) sent me the pdf version of this incoming book, Statistics done wrong, by Alex Reinhart, towards writing a book review for CHANCE, and I read it over two flights, one from Montpellier to Paris last week and one from Paris to B'ham this morning. The book is due to appear on March 16. It expands on a website Reinhart still maintains. (Discussed a year or so ago on Andrew's blog, mostly in the comments, witness Andrew's comment below.) Reinhart is, incidentally or not, a PhD candidate in statistics at Carnegie Mellon University, after apparently a rather substantial undergraduate foray into physics. Quite an unusual level of maturity and perspective for a PhD student..!

“It’s hard for me to evaluate because I am so close to the material. But on first glance it looks pretty reasonable to me.” A. Gelman

Overall, I found myself enjoying reading the book, even though I found the overall picture of the infinitely many misuses of statistics rather grim and a recipe for despairing of ever setting things straight..! Somehow, this is an anti-textbook, in that it warns about many ways of applying the right statistical technique in the wrong setting, without ever describing those statistical techniques. Actually without using a single maths equation. Which should be a reason good enough for me to let all hell break loose on that book! But, no, not really, I felt no compunction about agreeing with Reinhart's warnings and if you have been reading Andrew's blog for a while you should feel the same…

“Then again for a symptom like spontaneous human combustion you might get excited about any improvement.” A. Reinhart (p.13)

Maybe the limitation in the exercise is that statistics appears so fraught with dangers of over-interpretation and false positives, and that everyone (except physicists!) is bound to make such invalidated leaps in conclusion, willingly or not, that it sounds like the statistical side of Gödel's impossibility theorem! Further, the book moves from recommendations at the individual level, i.e., on how one should conduct an experiment and separate data for hypothesis building from data for hypothesis testing, to a universal criticism of the poor standards of scientific publishing and the unavailability of most datasets and codes. Hence calling for universal reproducibility protocols that reminded me of the directions explored in this recent book I reviewed on that topic. (The one the rogue bird did not like.) It may be missing on the bright side of things, for instance the wonderful possibility of using statistical models to produce simulated datasets that allow for an evaluation of the performances of a given procedure in the ideal setting. Which would have helped the increasingly depressed reader in finding ways of checking how wrong things could get..! But also on the dark side, as it does not say much about the fact that a statistical model is most presumably wrong. (Maybe a physicist's idiosyncrasy!) There is a chapter entitled Model Abuse, but all it does is criticise stepwise regression and somehow botch the description of Simpson's paradox.

“You can likely get good advice in exchange for some chocolates or a beer or perhaps coauthorship on your next paper.” A. Reinhart (p.127)

The final pages are however quite redeeming in that they acknowledge that scientists from other fields cannot afford a solid enough training in statistics and hence should hire statisticians as consultants for the data collection, analysis and interpretation of their experiments. A most reasonable recommendation!

## accelerating Metropolis-Hastings algorithms by delayed acceptance

Posted in Books, Statistics, University life on March 5, 2015 by xi'an

Marco Banterle, Clara Grazian, Anthony Lee, and myself just arXived our paper "Accelerating Metropolis-Hastings algorithms by delayed acceptance", which is a major revision and upgrade of our "Delayed acceptance with prefetching" paper of last June, a paper we submitted at the last minute to NIPS but which did not get accepted. The difference with this earlier version is the inclusion of convergence results, in particular that, while the original Metropolis-Hastings algorithm dominates the delayed version in the Peskun ordering, the latter can improve upon the original for an appropriate choice of the early-stage acceptance step. We thus included a new section on optimising the design of the delayed step, by picking the optimal scaling à la Roberts, Gelman and Gilks (1997) in the first step and by proposing a ranking of the factors in the Metropolis-Hastings acceptance ratio that speeds up the algorithm. The algorithm thus becomes adaptive. Compared with the earlier version, we have not pursued the second thread of prefetching as much, simply mentioning that prefetching and delayed acceptance could be merged. We have also included a section on the alternative suggested by Philip Nutzman on the 'Og of using a growing ratio rather than individual terms, the advantage being that the probability of acceptance stabilises when the number of terms grows, with the drawback that expensive terms are not always computed last. In addition to our logistic and mixture examples, we also study in this version the MALA algorithm, since we can postpone computing the ratio of the proposals till the second step. The gain observed in one experiment is of the order of a ten-fold higher efficiency.
By comparison, and in answer to one comment on Andrew’s blog, we did not cover the HMC algorithm, since the preliminary acceptance step would require the construction of a proxy to the acceptance ratio, in order to avoid computing a costly number of derivatives in the discretised Hamiltonian integration.
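To make the two-stage idea concrete, here is a minimal sketch (not the code from our paper) of delayed acceptance on a toy one-dimensional target, assuming the log-target splits into a cheap surrogate factor plus an expensive correction; the specific target, factorisation, and tuning constants are illustrative only. A proposal must clear a first accept/reject test based on the cheap factor before the expensive correction is ever evaluated, and the product of the two stage-wise acceptance probabilities preserves detailed balance:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_pi_cheap(x):
    # inexpensive surrogate factor (hypothetical choice: Gaussian part)
    return -0.5 * x**2

def log_pi_full(x):
    # full log-target; the expensive "correction" is full minus cheap
    return -0.5 * x**2 - 0.1 * x**4

def delayed_acceptance_mh(x0, n_iter, scale=1.0):
    x = x0
    chain = np.empty(n_iter)
    for i in range(n_iter):
        y = x + scale * rng.standard_normal()
        # stage 1: screen with the cheap factor only
        log_r1 = log_pi_cheap(y) - log_pi_cheap(x)
        if np.log(rng.uniform()) < log_r1:
            # stage 2: expensive correction, computed only for survivors
            log_r2 = (log_pi_full(y) - log_pi_cheap(y)) \
                   - (log_pi_full(x) - log_pi_cheap(x))
            if np.log(rng.uniform()) < log_r2:
                x = y
        chain[i] = x
    return chain

chain = delayed_acceptance_mh(0.0, 20000)
```

Since min(1, r₁)·min(1, r₂) ≤ min(1, r₁r₂), each move is accepted no more often than under plain Metropolis-Hastings (the Peskun domination mentioned above); the gain comes entirely from skipping the expensive factor on proposals rejected at stage one.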

## Posterior predictive p-values and the convex order

Posted in Books, Statistics, University life on December 22, 2014 by xi'an

Patrick Rubin-Delanchy and Daniel Lawson [of Warhammer fame!] recently arXived a paper we had discussed with Patrick when he visited Andrew and me last summer in Paris. The topic is the evaluation of the posterior predictive probability of a larger discrepancy between data and model

$\mathbb{P}\left( f(X|\theta)\ge f(x^\text{obs}|\theta) \,|\,x^\text{obs} \right)$

which acts like a Bayesian p-value of sorts. I have discussed several times on this blog the reservations I have about this notion… including running one experiment on the uniformity of the ppp while in Duke last year. One of those reservations is that it evaluates the posterior probability of an event that does not exist a priori. Which is somewhat connected to the issue of using the data "twice".

“A posterior predictive p-value has a transparent Bayesian interpretation.”

Another item that was suggested [to me] in the current paper is the difficulty in defining the posterior predictive (pp), for instance by including latent variables

$\mathbb{P}\left( f(X,Z|\theta)\ge f(x^\text{obs},Z^\text{obs}|\theta) \,|\,x^\text{obs} \right)\,,$

which reminds me of the multiple possible avatars of the BIC criterion. The question addressed by Rubin-Delanchy and Lawson is how far from the uniform distribution this pp stands when the model is correct. The main result of their paper is that any sub-uniform distribution can be expressed as a particular posterior predictive. The authors also exhibit the distribution that achieves the bound produced by Xiao-Li Meng, namely that

$\mathbb{P}(P\le \alpha) \le 2\alpha$

where P is the above (top) probability. (Hence it is uniform up to a factor 2!) Obviously, the proximity with the upper bound only occurs in a limited number of cases that do not validate the overall use of the ppp. But this is certainly a nice piece of theoretical work.
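As a quick numerical illustration (mine, not the paper's), one can simulate the ppp in a conjugate normal model, using the sample mean as a hypothetical discrepancy, and check empirically both its sub-uniformity and Meng's 2α bound; the prior variance and simulation sizes below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

n, n_rep, n_sims = 20, 500, 2000
tau2 = 10.0  # prior variance for theta (arbitrary)

ppp = np.empty(n_sims)
for s in range(n_sims):
    # draw a dataset from the model: theta ~ N(0, tau2), x_i ~ N(theta, 1)
    theta0 = rng.normal(0.0, np.sqrt(tau2))
    x = rng.normal(theta0, 1.0, n)
    # conjugate posterior: theta | x ~ N(m, v)
    v = 1.0 / (n + 1.0 / tau2)
    m = v * x.sum()
    # posterior predictive replicate means, X_rep_bar | theta ~ N(theta, 1/n)
    theta_post = rng.normal(m, np.sqrt(v), n_rep)
    x_rep_mean = rng.normal(theta_post, 1.0 / np.sqrt(n))
    # ppp: posterior predictive probability of a larger discrepancy
    ppp[s] = np.mean(x_rep_mean >= x.mean())

# empirical check of Meng's bound P(ppp <= alpha) <= 2*alpha
tail_05 = np.mean(ppp <= 0.05)
tail_25 = np.mean(ppp <= 0.25)
```

For this pivotal discrepancy the ppp piles up near one half, so the empirical tail frequencies sit well below the 2α ceiling, consistent with the sub-uniformity result above.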