## O’Bayes 19/3

Posted in Books, pictures, Statistics, Travel, University life on July 2, 2019 by xi'an

Nancy Reid gave the first talk of the [Canada] day, in an impressive comparison of all approaches in statistics that involve a distribution of sorts on the parameter, connected with the presentation she gave at BFF4 in Harvard two years ago, including safe Bayes options this time. This was related to several (most?) of the talks at the conference, given the level of worry (!) about the choice of a prior distribution. But the main assessment of the methods still seemed to be centred on a frequentist notion of calibration, meaning that epistemic interpretations of probabilities and hence most of Bayesian answers were disqualified from the start.

In connection with Nancy's focus, Peter Hoff's talk also concentrated on frequency-valid confidence intervals in (linear) hierarchical models, using prior information or structure to build better, shrinkage-like confidence intervals at a given confidence level, although not in the decision-theoretic way adopted by George Casella, Bill Strawderman and others in the 1980s. It also made me wonder at the relevance of contemplating a fixed coverage as a natural goal. Above, a side result shown by Peter that I did not know and which may prove useful for Monte Carlo simulation.

Jaeyong Lee worked on a complex model for banded matrices that starts with a regular Wishart prior on the unrestricted space of matrices, computes the posterior and then projects this distribution onto the constrained subspace. (There is a rather substantial literature on this subject, including works by David Dunson in the past decade of which I was unaware.) This is a smart demarginalisation idea, but I wonder a wee bit at the notion, as the constrained space has measure zero for the larger model. This would explain the resulting posterior not being a true posterior for the constrained model, in the sense that there is no prior over the constrained space that could return such a posterior. Another form of marginalisation paradox. The crux of the paper is however about constructing a functional form of minimaxity. In his discussion of the paper, Guido Consonni provided a representation of the post-processed posterior (P³) that involves the Savage-Dickey ratio, sort of, making me more convinced of the connection.
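To make the post-processing idea concrete, here is a minimal sketch of my own (not the authors' construction): draw from the unconstrained conjugate Wishart posterior of a precision matrix, then project each draw onto the banded constraint by zeroing entries outside the band, which is the Frobenius-norm projection. The prior, the band half-width `k`, and the projection choice are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)
p, n, k = 5, 50, 1   # dimension, sample size, band half-width (all illustrative)

# simulate centred Gaussian data with identity precision
X = rng.standard_normal((n, p))
S = X.T @ X

# conjugate update: Wishart(p + 2, I) prior on the precision matrix
post_df = p + 2 + n
post_scale = np.linalg.inv(np.eye(p) + S)

def project_banded(M, k):
    """Project onto banded matrices by zeroing entries with |i - j| > k."""
    i, j = np.indices(M.shape)
    return np.where(np.abs(i - j) <= k, M, 0.0)

# sample the unconstrained posterior, then post-process each draw
draws = [project_banded(wishart.rvs(df=post_df, scale=post_scale, random_state=rng), k)
         for _ in range(100)]
```

As noted above, the collection `draws` is not a posterior under any prior on the banded subspace, which is precisely the point of contention.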

As a lighter aside, one item of local information I should definitely have broadcasted more loudly and long enough in advance to the conference participants is that the University of Warwick is not located in ye olde town of Warwick, where there is no university, but on the outskirts of the city of Coventry. Not to be confused with Coventry University. Located in Coventry.

## Bayes, reproducibility and the Quest for Truth

Posted in Books, Statistics, University life on April 27, 2017 by xi'an

Don Fraser, Mylène Bédard, and three coauthors have written a paper with the above dramatic title in Statistical Science about the reproducibility of Bayesian inference in the framework of what they call a mathematical prior. Connecting with the earlier quick-and-dirty tag attributed by Don to Bayesian credible intervals.

“We provide simple (…) counter-examples to general claims that Bayes can offer accuracy for statistical inference. To obtain this accuracy with Bayes, more effort is required compared to recent likelihood methods (…) [and] accuracy beyond first order is routinely not available (…) An alternative is to view default Bayes as an exploratory technique and then ask does it do as it overtly claims? Is it reproducible as understood in contemporary science? (…) No one has answers although speculative claims abound.” (p. 1)

The early stages of the paper question the nature of a prior distribution in terms of objectivity and reproducibility, which strikes me as a return to older debates on the nature of probability. And as a dubious insistence on the reality of a prior when the said reality is customarily and implicitly assumed for the sampling distribution. While we "can certainly ask how [a posterior] quantile relates to the true value of the parameter", I see no compelling reason why the associated quantile should be endowed with a frequentist coverage meaning, i.e., be more than a normative indication of the deviation from the true value. (Assuming there is such a parameter.) To consider that the credible interval of interest can be "objectively" assessed by simulation experiments evaluating its coverage is thus doomed from the start (since there is no reason to expect the nominal coverage) and situated on the wrong plane, since it stems from the hypothetical frequentist model for a range of parameter values. Instead I find simulations from (generating) models useful in a general ABC sense, namely that by producing realisations from the predictive one can assess to what degree the data is compatible with the formal construct. Binding reproducibility to the frequentist framework thus sounds wrong [to me], as it is itself model-based. In other words, I do not find the definition of reproducibility used in the paper to be objective (literally bouncing back from the Gelman and Hennig Read Paper).
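The kind of predictive compatibility check I have in mind can be illustrated in a toy Normal setting (my sketch, not the paper's proposal): draw from the posterior predictive and see where the observed summary falls in the predictive cloud, with all numbers below purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
x_obs = rng.normal(0.3, 1.0, size=100)   # pretend observed data

# posterior for the mean under a flat prior, known unit variance: N(xbar, 1/n)
post_mu = rng.normal(x_obs.mean(), 1 / np.sqrt(len(x_obs)), size=1000)

# posterior predictive replicates and a summary statistic (here the mean)
reps = rng.normal(post_mu[:, None], 1.0, size=(1000, len(x_obs)))
pred_summaries = reps.mean(axis=1)

# compatibility: where does the observed summary fall in the predictive cloud?
p_value = np.mean(pred_summaries >= x_obs.mean())
```

An extreme `p_value` would signal that the data is rough against the formal construct, without appealing to repeated sampling over a range of parameter values.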

At several points in the paper, the legal consequences of using a subjective prior are evoked as legally binding and implicitly as dangerous. With the example of the L’Aquila expert trial. I have trouble seeing the relevance of this entry as an adverse lawyer is as entitled to attack the expert on her or his sampling model. More fundamentally, I feel quite uneasy about bringing this type of argument into the debate!

## improved approximate-Bayesian model-choice method for estimating shared evolutionary history [reply from the author]

Posted in Books, Statistics, University life on June 3, 2014 by xi'an

[Here is a very kind and detailed reply from Jamie Oaks to the comments I made on his ABC paper a few days ago:]

First of all, many thanks for your thorough review of my pre-print! It is very helpful and much appreciated. I just wanted to comment on a few things you address in your post.

I am a little confused about how my replacement of continuous uniform probability distributions with gamma distributions for priors on several parameters introduces a potentially crippling number of hyperparameters. Both uniform and gamma distributions have two parameters. So, the new model only has one additional hyperparameter compared to the original msBayes model: the concentration parameter on the Dirichlet process prior on divergence models. Also, the new model offers a uniform prior over divergence models (though I don’t recommend it).

Your comment about there being no new ABC technique is 100% correct. The model is new, the ABC numerical machinery is not. Also, your intuition is correct, I do not use the divergence times to calculate summary statistics. I mention the divergence times in the description of the ABC algorithm with the hope of making it clear that the times are scaled (see Equation (12)) prior to the simulation of the data (from which the summary statistics are calculated). This scaling is simply to go from units proportional to time, to units that are proportional to the expected number of mutations. Clearly, my attempt at clarity only created unnecessary opacity. I’ll have to make some edits.

Regarding the reshuffling of the summary statistics calculated from different alignments of sequences, the statistics are not exchangeable. So, reshuffling them in a manner that is not consistent across all simulations and the observed data is not mathematically valid. Also, if elements are exchangeable, their order will not affect the likelihood (or the posterior, barring sampling error). Thus, if our goal is to approximate the likelihood, I would hope the reshuffling would also have little effect on the approximate posterior (otherwise my approximation is not so good?).

You are correct that my use of "bias" was not well defined in reference to the identity line of my plots of the estimated vs true probability of the one-divergence model. I think we can agree that, ideally (all assumptions are met), the estimated posterior probability of a model should estimate the probability that the model is correct. For large numbers of simulation replicates, the proportion of the replicates for which the one-divergence model is true will approximate the probability that the one-divergence model is correct. Thus, if the method has the desirable (albeit "frequentist") behavior such that the estimated posterior probability of the one-divergence model is an unbiased estimate of the probability that the one-divergence model is correct, the points should fall near the identity line. For example, let us say the method estimates a posterior probability of 0.90 for the one-divergence model for 1000 simulated datasets. If the method is accurately estimating the probability that the one-divergence model is the correct model, then the one-divergence model should be the true model for approximately 900 of the 1000 datasets. Any trend away from the identity line indicates the method is biased in the (frequentist) sense that it is not correctly estimating the probability that the one-divergence model is the correct model. I agree this measure of "bias" is frequentist in nature. However, it seems like a worthwhile goal for Bayesian model-choice methods to have good frequentist properties. If a method strongly deviates from the identity line, it is much more difficult to interpret the posterior probabilities that it estimates. Going back to my example of the posterior probability of 0.90 for 1000 replicates, I would be alarmed if the model was true in only 100 of the replicates.
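This calibration argument can be sketched in a toy two-model setting (an illustration of the identity-line check, not code from the paper): when replicates are simulated from the joint model, exact posterior model probabilities are automatically calibrated, so among replicates where the posterior probability of M0 is near 0.7, M0 should be true about 70% of the time. The models and sample sizes below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n_rep = 50_000

# each replicate: flip a fair coin for the true model, then draw one datum
m0_true = rng.random(n_rep) < 0.5                              # True -> M0: mu = 0
mu = np.where(m0_true, 0.0, 3.0 * rng.standard_normal(n_rep))  # M1: mu ~ N(0, 9)
x = rng.normal(mu, 1.0)

# exact posterior probability of M0 (marginal of x under M1 is N(0, 10))
f0 = norm.pdf(x, 0.0, 1.0)
f1 = norm.pdf(x, 0.0, np.sqrt(10.0))
p_m0 = f0 / (f0 + f1)

# calibration: among replicates with p_m0 near 0.7, M0 should be true ~70% of the time
band = (p_m0 > 0.65) & (p_m0 < 0.75)
observed_freq = m0_true[band].mean()
```

Plotting binned `p_m0` against the corresponding truth frequencies would trace the identity line; an approximate method whose points drift away from it is "biased" in exactly the sense described above.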

My apologies if my citation of your PNAS paper seemed misleading. The citation was intended to be limited to the context of ABC methods that use summary statistics that are insufficient across the models under comparison (like msBayes and the method I present in the paper). I will definitely expand on this sentence to make this clearer in revisions. Thanks!

Lastly, my concluding remarks in the paper about full-likelihood methods in this domain are not as lofty as you might think. The likelihood function of the msBayes model is tractable, and, in fact, has already been derived and implemented via reversible-jump MCMC (albeit not readily available yet). Also, there are plenty of examples of rich, Kingman-coalescent models implemented in full-likelihood Bayesian frameworks. Too many to list, but a lot of them are implemented in the BEAST software package. One noteworthy example is the work of Bryant et al. (2012, Molecular Biology and Evolution, 29(8), 1917–32) that analytically integrates over all gene trees for biallelic markers under the coalescent.

## ABC in 1984

Posted in Statistics on November 9, 2009 by xi'an

“Bayesian statistics and Monte Carlo methods are ideally suited to the task of passing many models over one dataset” D. Rubin, Annals of Statistics, 1984

Jean-Louis Foulley sent me a 1984 paper by Don Rubin that details in no uncertain terms the accept-reject algorithm at the core of the ABC algorithm! Namely,

Generate $\theta\sim\pi(\theta)$;
Generate $x\sim f(x|\theta)$;
Accept $\theta$ if $x=x_0$

Obviously, ABC goes further by replacing the acceptance step with the tolerance condition

$d(x,x_0) < \epsilon$

but this early occurrence is worth noticing nonetheless. It is also interesting to see that Don Rubin does not promote this simulation method in situations where the likelihood is not available, but rather as an intuitive way of understanding posterior distributions from a frequentist perspective, because $\theta$‘s from the posterior are those that could have generated the observed data. (The issue of the zero probability of the exact equality between simulated and observed data is not dealt with in the paper, maybe because the notion of a “match” between simulated and observed data is not clearly defined.) Apart from this historical connection, I recommend the entire paper as providing a very compelling argument for practical Bayesianism!
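The three steps above, with the ABC tolerance replacing the exact match, can be sketched in a toy Normal example of my own devising (prior, model, and tolerance are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
x_obs = 2.0        # the observed datum (illustrative)
eps = 0.1          # ABC tolerance

# prior: theta ~ N(0, 4); model: x | theta ~ N(theta, 1)
accepted = []
while len(accepted) < 1000:
    theta = rng.normal(0.0, 2.0)    # generate theta ~ pi(theta)
    x = rng.normal(theta, 1.0)      # generate x ~ f(x | theta)
    if abs(x - x_obs) < eps:        # keep theta when d(x, x_obs) < eps
        accepted.append(theta)
accepted = np.array(accepted)

# with a small eps, the accepted thetas approximate the exact
# conjugate posterior, here N(1.6, 0.8)
```

The accepted $\theta$'s are precisely "those that could have generated the observed data", in Rubin's intuitive reading, and shrinking `eps` towards zero recovers his exact-match version at the cost of a vanishing acceptance rate.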