**I**n the past weeks I have received and read several papers (and X validated entries) where the Bayes factor is used to compare priors. Which does not look right to me, not on the basis of my general dislike of Bayes factors!, but simply because this seems to clash with the (my?) concept of Bayesian model choice, and also because data should not play a role in that situation: it is used to select a *prior*, hence at least twice to run the inference; it resorts to a *single* parameter value (namely the one behind the data) to decide between two distributions; it has no asymptotic justification; and it eventually favours the prior most concentrated on the maximum likelihood estimator. And more. But I fear that this reticence to test for prior adequacy also extends to the prior predictive, or Box’s p-value, namely the probability under this prior predictive of observing something “more extreme” than the current observation, to quote from David Spiegelhalter.
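For concreteness, Box’s prior predictive p-value is easily approximated by simulation. Here is a minimal sketch in a hypothetical toy model of my own (normal mean with a normal prior and discrepancy T(x)=|x̄|, none of which comes from the papers discussed):

```python
# A minimal sketch of Box's prior predictive p-value in an assumed toy
# model: x_i ~ N(theta, 1) with prior theta ~ N(0, 1), discrepancy T(x) = |xbar|.
import numpy as np

rng = np.random.default_rng(1)
n = 10
x_obs = rng.normal(3.0, 1.0, n)  # data generated far out in the prior's tail

def box_p_value(x, n_sim=20_000):
    """P(T(X) >= T(x)) under the prior predictive, by Monte Carlo."""
    theta = rng.normal(0.0, 1.0, n_sim)                  # theta drawn from the prior
    x_rep = rng.normal(theta[:, None], 1.0, (n_sim, n))  # then X | theta from the model
    return np.mean(np.abs(x_rep.mean(axis=1)) >= np.abs(x.mean()))

p = box_p_value(x_obs)  # a small value flags prior-data conflict
```

A small p indicates the observed discrepancy is atypical under the prior predictive, which is precisely the prior-adequacy check I am uneasy about.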

## Archive for using the data twice

## leave Bayes factors where they once belonged

Posted in Statistics with tags Bayes factors, Bayesian Analysis, Bayesian decision theory, cross validated, prior comparison, prior predictive, prior selection, The Bayesian Choice, The Beatles, using the data twice, xkcd on February 19, 2019 by xi'an

## Posterior predictive p-values and the convex order

Posted in Books, Statistics, University life with tags Andrew Gelman, arXiv, Bayesian p-values, DIC, posterior predictive, uniformity, University of Bristol, using the data twice, warhammer, Xiao-Li Meng on December 22, 2014 by xi'an

**P**atrick Rubin-Delanchy and Daniel Lawson [of Warhammer fame!] recently arXived a paper we had discussed with Patrick when he visited Andrew and me last summer in Paris. The topic is the evaluation of the posterior predictive probability of a larger discrepancy between data and model,

$$P = \mathbb{P}\big(T(x^{\text{rep}},\theta) \ge T(x^{\text{obs}},\theta) \,\big|\, x^{\text{obs}}\big),$$

which acts like a Bayesian p-value of sorts. I have discussed several times on this blog the reservations I have about this notion… including running an experiment on the uniformity of the ppp while in Duke last year. One of those reservations is that it evaluates the posterior probability of an event that does not exist a priori. Which is somewhat connected to the issue of using the data “twice”.

“A posterior predictive p-value has a transparent Bayesian interpretation.”

Another item that was suggested [to me] in the current paper is the difficulty in defining the posterior predictive (pp), for instance by including latent variables, which reminds me of the multiple possible avatars of the BIC criterion. The question addressed by Rubin-Delanchy and Lawson is how far from the uniform distribution this pp stands when the model is correct. The main result of their paper is that any sub-uniform distribution can be expressed as a particular posterior predictive. The authors also exhibit the distribution that achieves the bound produced by Xiao-Li Meng, namely that

$$\mathbb{P}(P \le \alpha) \le 2\alpha \qquad \text{for every } \alpha \in (0,1),$$

where *P* is the above (top) probability. (Hence it is uniform up to a factor 2!) Obviously, the proximity with the upper bound only occurs in a limited number of cases that do not validate the overall use of the ppp. But this is certainly a nice piece of theoretical work.
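Meng’s bound is easy to watch in action. The following is a quick simulation sketch in an assumed conjugate normal toy model (my own choice, not one from the paper), checking that the frequency of small ppp values stays below twice the nominal level:

```python
# Empirical check of the 2*alpha bound on the ppp, in an assumed conjugate toy
# model: theta ~ N(0,1), x_1..x_n ~ N(theta,1), discrepancy T(x) = |xbar|.
import numpy as np

rng = np.random.default_rng(0)
n, n_data, n_mc = 5, 2000, 400

def ppp(x):
    # exact posterior here: theta | x ~ N(n*xbar/(n+1), 1/(n+1))
    xbar = x.mean()
    theta = rng.normal(n * xbar / (n + 1), (n + 1) ** -0.5, n_mc)
    x_rep = rng.normal(theta[:, None], 1.0, (n_mc, n))
    return np.mean(np.abs(x_rep.mean(axis=1)) >= np.abs(xbar))

# replicate datasets from the joint (prior x model) distribution
pvals = np.empty(n_data)
for i in range(n_data):
    theta0 = rng.normal()
    pvals[i] = ppp(rng.normal(theta0, 1.0, n))

for alpha in (0.05, 0.10, 0.20):
    print(alpha, (pvals <= alpha).mean())  # each frequency stays below 2*alpha
```

In this conservative setup the ppp in fact concentrates near ½, so the empirical frequencies sit well below the 2α ceiling.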

## ABC model choice by random forests

Posted in pictures, R, Statistics, Travel, University life with tags ABC, ABC model choice, arXiv, Asian lady beetle, CART, classification, DIYABC, machine learning, model posterior probabilities, Montpellier, posterior predictive, random forests, SNPs, using the data twice on June 25, 2014 by xi'an

**A**fter more than a year of collaboration, meetings, simulations, delays, switches, visits, more delays, more simulations, discussions, and a final marathon wrapping day last Friday, Jean-Michel Marin, Pierre Pudlo, and I at last completed our latest collaboration on ABC, with the central arguments that (a) using random forests is a good tool for choosing the most appropriate model and (b) evaluating the posterior misclassification error rather than the posterior probability of a model is an appropriate paradigm shift. The paper has been co-signed with our population genetics colleagues, Jean-Marie Cornuet and Arnaud Estoup, as they provided helpful advice on the tools and on the genetic illustrations and as they plan to include those new tools in their future analyses and DIYABC software. ABC model choice via random forests is now arXived and very soon to be submitted…

**O**ne scientific reason for this fairly long conception is that it took us several iterations to understand the intrinsic nature of the random forest tool and how it could be most naturally embedded in ABC schemes. We first imagined it as a filter from a set of summary statistics to a subset of significant statistics (hence the automated ABC advertised in some of my past or future talks!), with the additional appeal of an associated distance induced by the forest. However, we later realised that (a) further ABC steps were counterproductive once the model was selected by the random forest, (b) including more summary statistics was always beneficial to the performance of the forest, and (c) the connections between (i) the true posterior probability of a model, (ii) the ABC version of this probability, and (iii) the random forest version of the above were at best very loose. The above picture is taken from the paper: it shows how the true and the ABC probabilities (do not) relate in the example of an MA(q) model… We thus had another round of discussions and experiments before deciding the unthinkable, namely to give up the attempts to approximate the posterior probability in this setting and to come up with another assessment of the uncertainty associated with the decision. This led us to propose computing a posterior predictive error as the error assessment for ABC model choice. This is mostly a classification error but (a) it is based on the ABC posterior distribution rather than on the prior and (b) it does not require extra computations when compared with other empirical measures such as cross-validation, while avoiding the sin of using the data twice!
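As an illustration of the model-choice-as-classification viewpoint, here is a stripped-down sketch where a plain k-nearest-neighbour classifier stands in for the random forest, on two made-up models. The toy models, summary statistics, and tuning constants are all assumptions of mine, not the paper’s genetic examples:

```python
# ABC model choice as classification, sketched with a k-NN classifier standing
# in for the random forest. Assumed toy models (not from the paper):
# model 0: x = theta + N(0,1) noise; model 1: x = theta + Student-t(3) noise,
# with theta ~ N(0,1) in both cases.
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_ref, k = 50, 4000, 50

def summaries(x):
    # crude summary statistics: spread and tail weight
    return np.array([x.std(), np.mean(np.abs(x - x.mean()) > 2 * x.std())])

def simulate(m):
    theta = rng.normal()
    eps = rng.standard_t(3, n_obs) if m else rng.normal(size=n_obs)
    return summaries(theta + eps)

models = rng.integers(0, 2, n_ref)             # model index drawn from its prior
ref = np.array([simulate(m) for m in models])  # the ABC reference table

def knn_predict(s):
    d = np.linalg.norm(ref - s, axis=1)
    return int(models[np.argsort(d)[:k]].mean() > 0.5)

# prior-level error rate: misclassification over fresh draws from the prior
test_models = rng.integers(0, 2, 500)
prior_error = np.mean([knn_predict(simulate(m)) != m for m in test_models])
```

Note that this computes a prior-level error rate; the paradigm shift argued in the paper is to report instead an error evaluated under the ABC posterior attached to the observed data.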

## re-read paper

Posted in Books, Statistics, Travel, University life with tags conference, DIC, integrated likelihood, Murray Aitkin, Newcastle-upon-Tyne, Read paper, RSS, using the data twice on September 3, 2013 by xi'an

**T**oday, I attended the RSS Annual Conference in Newcastle-upon-Tyne. For one thing, I ran a Memorial session in memory of George Casella, with my (and his) friends Jim Hobert and Elias Moreno as speakers. (The session was well-attended if not overwhelmingly so.) For another thing, the RSS decided to have the DIC Read Paper by David Spiegelhalter, Nicky Best, Brad Carlin and Angelika van der Linde, *Bayesian measures of model complexity and fit*, re-Read, and I was asked to re-discuss the 2002 paper. Here are the slides of my discussion, borrowing from the 2006 *Bayesian Analysis* paper with Gilles Celeux, Florence Forbes, and Mike Titterington where we examined eight different versions of DIC for mixture models. (I refrained from using the title “snow white and the seven DICs” for a slide…) I also borrowed from our recent discussion of Murray Aitkin’s (2009) book. The other discussant was Elias Moreno, who focussed on consistency issues. *(More on this and David Spiegelhalter’s defence in a few posts!)* This was the first time I was giving a talk on a basketball court. (I once gave an exam there!)
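As a reminder of the basic construction behind all those competing DIC versions, here is a sketch of the original DIC = D̄ + p_D in an assumed conjugate normal toy model of mine, where exact posterior draws replace the usual MCMC output:

```python
# DIC = Dbar + pD, with pD = Dbar - D(thetabar), in an assumed conjugate setup
# (x_i ~ N(theta, 1), theta ~ N(0, 100)) where the posterior is available exactly.
import numpy as np

rng = np.random.default_rng(3)
n = 20
x = rng.normal(1.0, 1.0, n)

# exact posterior: theta | x ~ N(m, v)
v = 1.0 / (n + 1.0 / 100)
m = v * x.sum()
draws = rng.normal(m, np.sqrt(v), 50_000)

def deviance(theta):
    # -2 log-likelihood of the N(theta, 1) model, vectorised over theta
    theta = np.atleast_1d(theta)
    return np.sum((x[None, :] - theta[:, None]) ** 2, axis=1) + n * np.log(2 * np.pi)

d_bar = deviance(draws).mean()        # posterior mean deviance
d_hat = deviance(draws.mean())[0]     # deviance at the posterior mean (plug-in)
p_d = d_bar - d_hat                   # effective number of parameters
dic = d_bar + p_d
```

With a single parameter and a vague prior, p_D comes out close to 1, as it should; the eight versions in our 2006 paper differ precisely in what replaces the plug-in estimate and the likelihood in latent-variable models.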

## on using the data twice…

Posted in Books, Statistics, University life with tags Bayes factor, Bayesian data analysis, Bayesian foundations, Bayesian model choice, Bayesian tests, CHANCE, Error and Inference, The Bayesian Choice, using the data twice on January 13, 2012 by xi'an

**A**s I was writing my next column for **CHANCE**, I decided I would include a methodology box about *“using the data twice”*. Here is the draft. (The second part is reproduced *verbatim* from an earlier post on **Error and Inference**.)

Several aspects of the books covered in this **CHANCE** review [i.e., *Bayesian ideas and data analysis* and *Bayesian modeling using WinBUGS*] face the problem of “using the data twice”. What does that mean? Nothing really precise, actually. The accusation of “using the data twice” found in the Bayesian literature can be thrown at most procedures exploiting the Bayesian machinery without actually being Bayesian, i.e. which cannot be derived from the posterior distribution. For instance, the integrated likelihood approach in Murray Aitkin’s *Statistical Inference* avoids the difficulties related with improper priors π_{i} by *first* using the data x to construct (proper) posteriors π_{i}(θ_{i}|x) and then *secondly* using the data in a Bayes factor as if the posteriors were priors. This obviously solves the impropriety difficulty (see, e.g., *The Bayesian Choice*), but it creates a statistical procedure outside the Bayesian domain, hence requiring a separate validation since the usual properties of Bayesian procedures do not apply. Similarly, the whole empirical Bayes approach falls under this category, even though some empirical Bayes procedures are asymptotically convergent. The pseudo-marginal likelihood of Geisser and Eddy (1979), used in *Bayesian ideas and data analysis*, is defined through the marginal posterior likelihoods as

$$\prod_{i=1}^{n} f(x_i \mid x_{-i}), \qquad f(x_i \mid x_{-i}) = \int f(x_i \mid \theta)\,\pi(\theta \mid x_{-i})\,\mathrm{d}\theta.$$

While it also allows for improper priors, it does use the same data in each term of the product and, again, it is not a Bayesian procedure.
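For illustration, the pseudo-marginal likelihood is available in closed form in conjugate settings. The sketch below uses an assumed normal mean model (my choice, not one from the books) and makes explicit how each observation enters both a “test” role and, n−1 times, a “training” role:

```python
# Geisser-Eddy pseudo-marginal likelihood, prod_i f(x_i | x_{-i}), in closed
# form for an assumed conjugate model: x_i ~ N(theta, 1), theta ~ N(0, 100).
# Each term is the leave-one-out posterior predictive density at x_i.
import numpy as np

rng = np.random.default_rng(4)
n = 30
x = rng.normal(0.5, 1.0, n)

log_pml = 0.0
for i in range(n):
    x_rest = np.delete(x, i)
    v = 1.0 / (len(x_rest) + 1.0 / 100)  # leave-one-out posterior variance
    m = v * x_rest.sum()                 # leave-one-out posterior mean
    s2 = 1.0 + v                         # predictive variance for the held-out x_i
    log_pml += -0.5 * (np.log(2 * np.pi * s2) + (x[i] - m) ** 2 / s2)
```

The loop makes the double use visible: every x_i appears once on the left of the conditioning bar and n−1 times on the right, across the terms of the product.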

Once again, from first principles, a Bayesian approach should use the data only once, namely when constructing the posterior distribution on every *unknown* component of the model(s). Based on this all-encompassing posterior, all inferential aspects should be the consequences of a sequence of decision-theoretic steps in order to select optimal procedures. This is the ideal setting, while, in practice, relying on a *sequence* of posterior distributions is often necessary, each posterior being a consequence of earlier decisions, which makes it the result of a multiple (improper) use of the data… For instance, the process of Bayesian variable selection is in principle clean from the sin of “using the data twice”: one simply computes the posterior probability of each of the variable subsets and this is over. However, in a case involving many (many) variables, there are two difficulties: one is about building the prior distributions for all possible models, a task that needs to be automatised to some extent; another is about exploring the set of potential models. First, resorting to projection priors as in the intrinsic solution of Pérez and Berger (2002, *Biometrika*, a most valuable article!), while unavoidable and a “least worst” solution, means switching priors/posteriors based on earlier acceptances/rejections, i.e. on the data. Second, the path of models truly explored by a computational algorithm [which will be a minuscule subset of the set of all models] will depend on the models rejected so far, either when relying on a stepwise exploration or when using a random walk MCMC algorithm. Although this is not crystal clear (there is actually plenty of room for supporting the opposite view!), it could be argued that the data is thus used several times in this process…
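To make the “clean in principle” part concrete, here is a sketch of exhaustive variable selection over a handful of predictors, with the BIC approximation exp(−BIC/2) standing in for proper marginal likelihoods (an assumption of mine, not a recommendation from the column). Every subset is visited, so the data enters only once, when all posterior probabilities are computed in one go:

```python
# Exhaustive Bayesian variable selection over all subsets of p predictors,
# with p(M | y) approximated (an assumed stand-in) by exp(-BIC_M / 2).
import itertools
import numpy as np

rng = np.random.default_rng(5)
n, p = 100, 4
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(size=n)  # true subset: {0, 1}

def bic(subset):
    # OLS fit of y on an intercept plus the chosen columns
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    return n * np.log(rss / n) + Z.shape[1] * np.log(n)

subsets = [s for r in range(p + 1) for s in itertools.combinations(range(p), r)]
logw = np.array([-bic(s) / 2 for s in subsets])
probs = np.exp(logw - logw.max())
probs /= probs.sum()                 # approximate posterior model probabilities
best = subsets[int(np.argmax(probs))]
```

With 2^p = 16 models this enumeration is trivial; the difficulties discussed above only bite when 2^p makes the exhaustive sweep impossible and the exploration path itself starts depending on the data.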

## Error and Inference [#3]

Posted in Books, Statistics, University life with tags ABC, Bayes factor, Bayesian inference, book review, Error-Statistical philosophy, Evidence and Evolution, fractional Bayes factor, frequentist inference, Karl Popper, Neyman-Pearson, R.A. Fisher, simultaneous equations model, statistical inference, testing of hypotheses, using the data twice, variable selection on September 14, 2011 by xi'an

*(This is the third post on Error and Inference, yet again being a raw and naïve reaction to a linear reading rather than a deeper and more informed criticism.)*

“Statistical knowledge is independent of high-level theories.” —A. Spanos, p.242, *Error and Inference*, 2010

**T**he sixth chapter of *Error and Inference* is written by Aris Spanos and deals with the issues of testing in econometrics. It provides on the one hand a fairly interesting entry in the history of economics and the resistance to data-backed theories, primarily because the buffers between data and theory are multifold (“*huge gap between economic theories and the available observational data*”, p.203). On the other hand, what I fail to understand in the chapter is the meaning of theory, as it seems very distinct from what I would call a (statistical) model. The sentence “*statistical knowledge, stemming from a statistically adequate model allows data to ‘have a voice of its own’ (…) separate from the theory in question and it succeeds in securing the frequentist goal of objectivity in theory testing*” (p.206) is puzzling in this respect. (Actually, I would have liked to see a clear meaning put to this “voice of its own”, as it otherwise sounds mostly like a catchy sentence…) Similarly, Spanos distinguishes between three types of models: primary/theoretical, experimental/structural: “*the structural model contains a theory’s substantive subject matter information in light of the available data*” (p.213), and data/statistical: “*the statistical model is built exclusively using the information contained in the data*” (p.213). I have trouble understanding how testing can distinguish between those types of models: as a naïve reader, I would have thought that only the statistical model could be tested by a statistical procedure, even though I would not call the above a proper definition of a statistical model (esp. since Spanos writes a few lines below that the statistical model “*would embed (nest) the structural model in its context*”, p.213). The normal example followed on pages 213-217 does not help *[me]* to put sense to this distinction: it simply illustrates the impact of failing some of the defining assumptions (normality, time homogeneity [in mean and variance], independence). (As an aside, the discussion about the poor estimation of the correlation p.214-215 does not help, because it involves a second variable Y that is not defined for this example.) It would be nice of course if the “noise” in a statistical/econometric model could be studied in complete separation from the structure of this model; however, they seem to be irremediably intermingled, preventing this partition of roles. I thus do not see how the “statistically adequate model is independent from the substantive information” (p.217), i.e. by which rigorous process one can isolate the “chance” parts of the data to build and validate a statistical model *per se*. The simultaneous equation model (SEM, pp.230-231) is more illuminating of the distinction set by Spanos between structural and statistical models/parameters, even though the difference in this case boils down to a question of identifiability. Continue reading

## Error and Inference [#2]

Posted in Books, Statistics, University life with tags ABC, Bayes factor, Bayesian inference, book review, Error-Statistical philosophy, Evidence and Evolution, fractional Bayes factor, frequentist inference, Karl Popper, Neyman-Pearson, R.A. Fisher, statistical inference, testing of hypotheses, using the data twice, variable selection on September 8, 2011 by xi'an

*(This is the second post on Error and Inference, again being a raw and naive reaction to a linear reading rather than a deeper and more informed criticism.)*

“Allan Franklin once gave a seminar under the title ‘Ad Hoc is not a four letter word.’” —J. Worrall, p.130, *Error and Inference*, 2010

**T**he fourth chapter of *Error and Inference*, written by John Worrall, covers the highly interesting issue of “using the data twice”. The point has been debated several times on Andrew’s blog and this is one of the main criticisms raised against Aitkin’s posterior/integrated likelihood. Worrall’s perspective is both related and unrelated to this purely statistical issue, when he considers that “you can’t use the same fact twice, once in the construction of a theory and then again in its support” (p.129). (He even signed a “UN Charter”, where UN stands for “use novelty”!) After reading both Worrall’s and Mayo’s viewpoints, the latter being that all that matters is severe testing as it encompasses the UN perspective (if I understood correctly), I am afraid I am none the wiser, but this led me to reflect on the statistical issue. Continue reading