As I was writing my next column for **CHANCE**, I decided I would include a methodology box about *“using the data twice”*. Here is the draft. (The second part is reproduced *verbatim* from an earlier post on *Error and Inference*.)

Several aspects of the books covered in this **CHANCE** review [i.e., *Bayesian ideas and data analysis* and *Bayesian modeling using WinBUGS*] face the problem of “using the data twice”. What does that mean? Nothing really precise, actually. The accusation of “using the data twice” found in the Bayesian literature can be thrown at most procedures exploiting the Bayesian machinery without actually being Bayesian, i.e. which cannot be derived from the posterior distribution. For instance, the integrated likelihood approach in Murray Aitkin’s *Statistical Inference* avoids the difficulties related with improper priors π_{i} by first using the data x to construct (proper) posteriors π_{i}(θ_{i}|x) and then using the data a second time in a Bayes factor, as if those posteriors were priors. This obviously solves the impropriety difficulty (see, e.g., …), but it creates a statistical procedure outside the Bayesian domain, hence requiring a separate validation since the usual properties of Bayesian procedures do not apply. Similarly, the whole empirical Bayes approach falls under this category, even though some empirical Bayes procedures are asymptotically convergent. The pseudo-marginal likelihood of Geisser and Eddy (1979), discussed in *The Bayesian Choice* and used in *Bayesian ideas and data analysis*, is defined through the product of the leave-one-out predictive densities, ∏_{i} f(x_{i}|x_{-i}), i.e. through the marginal posterior likelihoods. While it also allows for improper priors, it does use the same data in each term of the product and, again, it is not a Bayesian procedure.
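To make the double use concrete, here is a small numerical sketch (my own toy construction, not taken from either book) for the model x_i ~ N(θ,1): it computes Aitkin’s integrated-likelihood solution for testing θ=0 against an improper flat prior, where the posterior built from the data is then averaged against the very same likelihood, and the Geisser–Eddy pseudo-marginal likelihood, where each observation appears once as the “new” point and n−1 times inside the conditioning sets of the other terms.

```python
# Toy sketch (my construction, not from the books under review):
# x_i ~ N(theta, 1), testing M0: theta = 0 vs M1: theta unknown, flat prior.
# Both quantities below reuse the same data twice.
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(0.3, 1.0, size=20)
n, xbar = len(x), x.mean()

def log_lik(theta):
    # log-likelihood of the whole sample at each value in the array theta
    return -0.5 * n * np.log(2 * np.pi) - 0.5 * ((x[:, None] - theta) ** 2).sum(axis=0)

# -- Aitkin's integrated likelihood ------------------------------------------
# First use of the data: the (proper) posterior under the flat prior is
# theta | x ~ N(xbar, 1/n).  Second use: the same likelihood is averaged
# against that posterior, as if it were a prior.
grid = np.linspace(xbar - 8 / np.sqrt(n), xbar + 8 / np.sqrt(n), 4001)
post = np.sqrt(n / (2 * np.pi)) * np.exp(-0.5 * n * (grid - xbar) ** 2)
step = grid[1] - grid[0]
int_lik_m1 = (np.exp(log_lik(grid)) * post).sum() * step   # Riemann sum
pbf_01 = np.exp(log_lik(np.array([0.0]))[0]) / int_lik_m1
# closed form in this model: sqrt(2) * exp(-n * xbar**2 / 2)

# -- Geisser & Eddy's pseudo-marginal likelihood -----------------------------
# Each x_i is used once as the "new" observation and n-1 times inside the
# conditioning sets of the other product terms.
def loo_log_predictive(i):
    rest = np.delete(x, i)
    m, v = rest.mean(), 1.0 + 1.0 / (n - 1)   # x_i | x_{-i} ~ N(m, v)
    return -0.5 * np.log(2 * np.pi * v) - 0.5 * (x[i] - m) ** 2 / v

log_pseudo_marginal = sum(loo_log_predictive(i) for i in range(n))
print(pbf_01, log_pseudo_marginal)
```

Neither number is a marginal likelihood in the Bayesian sense: the first is Aitkin’s “posterior Bayes factor”, well defined despite the improper prior precisely because the data entered twice, and the second multiplies n predictive densities that all condition on overlapping portions of the same sample.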

Once again, from first principles, a Bayesian approach should use the data only once, namely when constructing the posterior distribution on every *unknown* component of the model(s). Based on this all-encompassing posterior, all inferential aspects should be the consequences of a sequence of decision-theoretic steps in order to select optimal procedures. This is the ideal setting while, in practice, relying on a *sequence* of posterior distributions is often necessary, each posterior being a consequence of earlier decisions, which makes it the result of a multiple (improper) use of the data…

For instance, the process of Bayesian variable selection is in principle clean from the sin of *“using the data twice”*: one simply computes the posterior probability of each of the variable subsets and this is over. However, in a case involving many (many) variables, there are two difficulties: one is about building the prior distributions for all possible models, a task that needs to be automated to some extent; another is about exploring the set of potential models. First, resorting to projection priors as in the intrinsic solution of Pérez and Berger (2002, *Biometrika*, a most valuable article!), while unavoidable and a “least worst” solution, means switching priors/posteriors based on earlier acceptances/rejections, i.e. on the data. Second, the path of models truly explored by a computational algorithm [which will be a minuscule subset of the set of all models] will depend on the models rejected so far, either when relying on a stepwise exploration or when using a random walk MCMC algorithm. Although this is not crystal clear (there is actually plenty of room for supporting the opposite view!), it could be argued that the data is thus used several times in this process…
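As a cartoon of the “clean” version, the following sketch (my own toy setup, not from the books) enumerates all 2^p subsets for a small p and computes their posterior probabilities in one pass, using the closed-form Zellner g-prior Bayes factor against the null model (as in Liang et al., 2008) and a uniform prior over models. It is feasible only because p is tiny, which is exactly why the approximate prior constructions and stochastic explorations discussed above become necessary.

```python
# Toy sketch (assumed setup): exhaustive Bayesian variable selection with
# Zellner g-prior Bayes factors against the null model, uniform model prior.
import itertools
import numpy as np

rng = np.random.default_rng(7)
n, p, g = 50, 4, 50
X = rng.normal(size=(n, p))
# true model uses variables 0 and 2 only
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)
yc = y - y.mean()                       # center out the intercept

def r_squared(cols):
    # R^2 of the least-squares fit on the selected (centered) columns
    if not cols:
        return 0.0
    Xc = X[:, cols] - X[:, cols].mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    resid = yc - Xc @ beta
    return 1.0 - resid @ resid / (yc @ yc)

subsets = [list(c) for k in range(p + 1)
           for c in itertools.combinations(range(p), k)]
# closed-form g-prior Bayes factor of model s against the null model
log_bf = np.array([
    0.5 * (n - 1 - len(s)) * np.log(1 + g)
    - 0.5 * (n - 1) * np.log(1 + g * (1 - r_squared(s)))
    for s in subsets
])
post = np.exp(log_bf - log_bf.max())
post /= post.sum()                      # posterior probabilities of all models
best = subsets[int(np.argmax(post))]
print(best, post.max())
```

The data is used exactly once here, to turn the 2^p prior probabilities into 2^p posterior probabilities; the trouble starts when p forbids the enumeration and the exploration path itself starts reacting to earlier computations on the same data.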