Archive for Bayesian hypothesis testing

Bertrand-Borel debate

Posted in Books, Statistics on May 6, 2019 by xi'an

On her blog, Deborah Mayo briefly mentioned the Bertrand-Borel debate on the (in)feasibility of hypothesis testing, as reported [and translated] by Erich Lehmann. A first interesting feature is that both mathematicians [whose names start with a B] discuss the probability of causes in the Bayesian spirit of Laplace. With Bertrand considering that the prior probabilities of the different causes are impossible to set, and then moving all the way to dismissing the use of probability theory in this setting, nipping p-values in the bud! And Borel being rather vague about the solution probability theory has to provide, as stressed by Lehmann.

“The Pleiades appear closer to each other than one would naturally expect. This statement deserves thinking about; but when one wants to translate the phenomenon into numbers, the necessary ingredients are lacking. In order to make the vague idea of closeness more precise, should we look for the smallest circle that contains the group? the largest of the angular distances? the sum of squares of all the distances? the area of the spherical polygon of which some of the stars are the vertices and which contains the others in its interior? Each of these quantities is smaller for the group of the Pleiades than seems plausible. Which of them should provide the measure of implausibility? If three of the stars form an equilateral triangle, do we have to add this circumstance, which is certainly very unlikely a priori, to those that point to a cause?” Joseph Bertrand (p.166)

 

“But whatever objection one can raise from a logical point of view cannot prevent the preceding question from arising in many situations: the theory of probability cannot refuse to examine it and to give an answer; the precision of the response will naturally be limited by the lack of precision in the question; but to refuse to answer under the pretext that the answer cannot be absolutely precise, is to place oneself on purely abstract grounds and to misunderstand the essential nature of the application of mathematics.” Emile Borel (Chapter 4)

Another highly interesting objection of Bertrand is somewhat linked with his conditioning paradox, namely that the density of the observed unlikely event depends on the choice of the statistic used to calibrate the unlikeliness. Which makes complete sense in that the information contained in each of these statistics, and the resulting probability or likelihood, differ to an arbitrary extent, that there are few cases (monotone likelihood ratio) where the choice can be made, and that Bayes factors share the same drawback if they do not condition upon the entire sample. In which case there is no selection of “circonstances remarquables” [remarkable circumstances]. Or of uniformly most powerful tests.

mixture modelling for testing hypotheses

Posted in Books, Statistics, University life on January 4, 2019 by xi'an

After a fairly long delay (since the first version was posted and submitted in December 2014), we eventually revised and resubmitted our paper with Kaniav Kamary [who has now graduated], Kerrie Mengersen, and Judith Rousseau on the final day of 2018. The main reason for this massive delay is mine, as I got fairly depressed by the general tone of the dozen reviews we received after submitting the paper as a Read Paper in the Journal of the Royal Statistical Society. Despite a rather opposite reaction from the community (an admittedly biased sample!), including two dozen citations in other papers. (There seems to be a pattern in my submissions of Read Papers, witness our earlier and unsuccessful attempt with Christophe Andrieu in the early 2000’s with the paper on controlled MCMC, leading to 121 citations so far according to G scholar.) Anyway, thanks to my co-authors keeping up the fight, we started working on a revision including stronger convergence results, managing to show that the approach leads to an optimal separation rate, contrary to the Bayes factor which carries an extra √log(n) factor. This may sound paradoxical since, while the Bayes factor converges to 0 under the alternative model exponentially quickly, the convergence rate of the mixture weight α to 1 is of order 1/√n, but this does not mean that the separation rate of the procedure based on the mixture model is worse than that of the Bayes factor. On the contrary, while it is well known that the Bayes factor leads to a separation rate of order √log(n)/√n in parametric models, we show that our approach can lead to a testing procedure with a better separation rate of order 1/√n. We also studied a non-parametric setting where the null is a specified family of distributions (e.g., Gaussians) and the alternative is a Dirichlet process mixture, establishing that the posterior distribution concentrates around the null at the rate √log(n)/√n. We thus resubmitted the paper for publication, although not as a Read Paper, with hopefully more luck this time!
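
For readers curious about the mechanics, here is a minimal and purely illustrative R sketch of the encompassing-mixture idea, with a toy pair of models, M0: N(0,1) versus M1: N(μ,1), and made-up priors (a Beta(a0,a0) weight and a N(0,10²) prior on μ); this is not the paper's code nor its recommended settings, merely a Gibbs sampler monitoring the posterior of the mixture weight α.

set.seed(123)
x <- rnorm(100, mean = 2)        # toy data generated under M1: N(mu, 1) with mu = 2
n <- length(x)
a0 <- 0.5                        # Beta(a0, a0) prior on the mixture weight (an arbitrary choice)
n_iter <- 5e3
alpha <- mu <- numeric(n_iter)
alpha[1] <- 0.5; mu[1] <- mean(x)
for (t in 2:n_iter) {
  # latent allocations: component 1 is N(mu, 1) [M1], component 0 is N(0, 1) [M0]
  p1 <- alpha[t - 1] * dnorm(x, mu[t - 1], 1)
  p0 <- (1 - alpha[t - 1]) * dnorm(x, 0, 1)
  z <- rbinom(n, 1, p1 / (p1 + p0))
  # conjugate updates: Beta for the weight, Normal for mu under a N(0, 10^2) prior
  alpha[t] <- rbeta(1, a0 + sum(z), a0 + n - sum(z))
  v <- 1 / (sum(z) + 1 / 100)
  mu[t] <- rnorm(1, v * sum(x[z == 1]), sqrt(v))
}
mean(alpha[-(1:1000)])           # the posterior of alpha piles up near one, pointing to M1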

a question from McGill about The Bayesian Choice

Posted in Books, pictures, Running, Statistics, Travel, University life on December 26, 2018 by xi'an

I received an email from a group of McGill students working on Bayesian statistics and using The Bayesian Choice (although the exercise pictured below is not in the book, the closest being exercise 1.53, inspired from Raiffa and Schlaifer, 1961, and exercise 5.10 as mentioned in the email):

There was a question that some of us cannot seem to decide what is the correct answer. Here are the issues,

Some people believe that the answer to both is ½, while others believe it is 1. The reasoning for ½ is that since Beta is a continuous distribution, we never could have θ exactly equal to ½. Thus regardless of α, the probability that θ=½ in that case is 0. Hence it is ½. I found a related stack exchange question that seems to indicate this as well.

The other side is that by Markov property and mean of Beta(a,a), as α goes to infinity, we will approach ½ with probability 1. And hence the limit as α goes to infinity for both (a) and (b) is 1. I think this also could make sense in another context, as if you use the Bayes factor representation. This is similar I believe to the questions in the Bayesian Choice, 5.10, and 5.11.

As it happens, the answer is ½ in the first case (a) because π(H₀) is ½ regardless of α, and 1 in the second case (b) because the evidence against H₀ goes to zero as α goes to zero (watch out!), along with the mass of the prior on any compact subset of (0,1), since the normalising constant Γ(2α)/Γ(α)² goes to zero. (The limit does not correspond to a proper prior and hence is somewhat meaningless.) However, when α goes to infinity, the evidence against H₀ goes to infinity and the posterior probability of θ=½ goes to zero, despite the prior under the alternative being more and more concentrated around ½!
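
To spell out the limiting behaviour of the Beta(α,α) prior invoked here (a standard computation, not part of the original exchange):

\[
\pi_\alpha(\theta)=\frac{\Gamma(2\alpha)}{\Gamma(\alpha)^2}\,\theta^{\alpha-1}(1-\theta)^{\alpha-1},
\qquad
\frac{\Gamma(2\alpha)}{\Gamma(\alpha)^2}\sim\frac{\alpha}{2}\;\longrightarrow\;0
\quad(\alpha\to 0),
\]

so that, for any compact interval [ε,1−ε] ⊂ (0,1), the prior mass ∫ from ε to 1−ε of π_α(θ)dθ tends to zero and all the mass escapes to the endpoints {0,1}.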

a glaring mistake

Posted in Statistics on November 28, 2018 by xi'an

Someone posted this question about Bayes factors in my book on Saturday morning and I could not believe the glaring typo pointed out there had gone through the centuries without anyone noticing! There should be no index 0 or 1 on the θ’s in either integral (or indices all over). I presume I made this typo when cutting & pasting from the previous formula (which addressed the case of two point null hypotheses), but I am quite chagrined that I sabotaged the definition of the Bayes factor for generations of readers of the Bayesian Choice. Apologies!!!
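
For the record, the intended definition (with the indices only on the parameter subspaces and prior densities, not on the integration variable) should read something like

\[
B_{01}(x)=\frac{\int_{\Theta_0} f(x\mid\theta)\,\pi_0(\theta)\,\mathrm d\theta}{\int_{\Theta_1} f(x\mid\theta)\,\pi_1(\theta)\,\mathrm d\theta}\,.
\]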

practical Bayesian inference [book review]

Posted in Books, Kids, R, Statistics, University life on April 26, 2018 by xi'an

[Disclaimer: I received this book by Coryn Bailer-Jones for a review in the International Statistical Review and intend to submit a revised version of this post as my review. As usual, book reviews on the ‘Og reflect my own definitely personal and highly subjective views on the topic!]

It is always a bit of a challenge to review introductory textbooks as, on the one hand, they are rarely written at the level and with the focus one would personally choose to write them. And, on the other hand, it is all too easy to find issues with the material presented and the way it is presented… So be warned and proceed cautiously! In the current case, Practical Bayesian Inference tries to embrace too much, methinks, by starting from basic probability notions (that should not be unknown to physical scientists, I believe, and that could have been skipped, avoiding the introduction of a flat measure as a uniform distribution over the real line!, p.20), all the way to running MCMC for parameter estimation, to comparing models by Bayesian evidence, and to covering non-parametric regression and bootstrap resampling. For instance, priors only make their appearance on page 71, with a puzzling choice of an improper prior (?) leading to an improper posterior (??), which is certainly not the smoothest entry on the topic. “Improper posteriors are a bad thing“, indeed! And using truncation to turn them into proper distributions is not a clear improvement, as the truncation point will significantly impact the inference. Discussing the choice of priors from the beginning has some appeal, but it may also create confusion in the novice reader (although one never knows!). Even asking about “what is a good prior?” (p.73) is not necessarily the best (and my recommended) approach to a proper understanding of the Bayesian paradigm. And arguing about the unicity of the prior (p.119) clashes with my own view of the prior being primarily a reference measure rather than an ideal summary of the available information. (The book argues at some point that there is no fixed model parameter, another and connected source of disagreement.) There is a section on assigning priors (p.113), but it only covers the case of a possibly biased coin, without much realism. A feature common to many Bayesian textbooks though. To return to the issue of improper priors (and posteriors), the book includes several warnings about the danger of hitting an undefined posterior (still called a distribution), without providing real guidance on checking for its definition. (A tough question, to be sure.)
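
To make the danger concrete with a standard textbook example (mine, not one taken from the book under review): with x ∼ B(n,p) and the improper Haldane prior π(p) ∝ p⁻¹(1−p)⁻¹, the formal posterior

\[
\pi(p\mid x)\;\propto\;p^{x-1}(1-p)^{n-x-1}
\]

integrates to the finite Beta constant B(x, n−x) when 0 < x < n, but diverges when x = 0 or x = n, in which cases the posterior is not a probability distribution at all.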

“One big drawback of the Metropolis algorithm is that it uses a fixed step size, the magnitude of which can hardly be determined in advance…” (p.165)

When introducing computational techniques, quadratic (or Laplace) approximation of the likelihood is mingled with kernel estimators, which does not seem appropriate. Proposing to check convergence and calibrate MCMC via ACF graphs is helpful in low dimensions, but not in larger dimensions. And while the warning about the danger of forgetting the Jacobian in the Metropolis-Hastings acceptance probability when using a transform like η=ln θ is well-taken, the loose handling of changes of variables may be more confusing than helpful (p.167). Discussing and providing two R codes for the (standard) Metropolis algorithm may prove too much. Or not. But using a four-page R code for fitting a simple linear regression with a flat prior (pp.182-186) may definitely put the reader off! Even though I deem the example a proper experiment in setting a Metropolis algorithm and appreciate the detailed description around the R code itself. (I just take exception to the paragraph on running the code with two or even one observation, as the fact that “the Bayesian solution always exists” (p.188) [under a proper prior] is not necessarily convincing…)
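
To illustrate the point about the Jacobian with a toy example of my own (a Gamma(3,1) target, nothing from the book), a bare-bones random-walk Metropolis sampler on the log scale must include the log-Jacobian term, i.e. add η to the log-target, in the acceptance ratio:

set.seed(1)
log_target <- function(theta) dgamma(theta, shape = 3, rate = 1, log = TRUE)
n_iter <- 1e4
eta <- numeric(n_iter)                        # eta = log(theta), started at theta = 1
for (t in 2:n_iter) {
  prop <- eta[t - 1] + rnorm(1, sd = 0.5)     # fixed step size, as discussed in the book
  # log acceptance ratio: target evaluated in theta, plus log |d theta / d eta| = eta
  log_alpha <- log_target(exp(prop)) + prop -
    log_target(exp(eta[t - 1])) - eta[t - 1]
  if (log(runif(1)) < log_alpha) eta[t] <- prop else eta[t] <- eta[t - 1]
}
theta <- exp(eta)
mean(theta)    # close to the Gamma(3, 1) mean, i.e. 3
acf(theta)     # the ACF-based calibration check mentioned above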

“In the real world we cannot falsify a hypothesis or model any more than we “truthify” it (…) All we can do is ask which of the available models explains the data best.” (p.224)

In a similar format, the discussion on testing of hypotheses starts with a lengthy presentation of classical tests and p-values, the chapter ending up with a list of issues, most of them reasonable in my own frame of reference. I also concur with the conclusive remarks quoted above that what matters is a comparison of (all relatively false) models. What I agree less with [as predictable from earlier posts and papers] is the (standard) notion that comparing two models with a Bayes factor follows from the no information (in order to avoid the heavily loaded non-informative) prior weights of ½ and ½. Or similarly that the evidence is uniquely calibrated. Or, again, using a truncated improper prior under one of the assumptions (with the ghost of the Jeffreys-Lindley paradox lurking nearby…). While the Savage-Dickey approximation is mentioned, the first numerical approach to the Bayes factor is via simulations from the priors, which may be very poor in the situation of vague and uninformative priors. And then the deadly harmonic mean makes an entry (p.242), along with nested sampling… There is also a list of issues about Bayesian model comparison, including (strong) dependence on the prior, dependence on irrelevant alternatives, lack of goodness of fit tests, and computational costs, including calls to a possibly intractable likelihood function, ABC then being mentioned as a solution (which it is not, mostly).
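
As a toy illustration of why simulating from the prior is a poor way of approximating the evidence when the prior is vague (again my own example, not the book's), consider a single observation x ∼ N(θ,1) with a N(0,τ²) prior on θ: the relative variability of the likelihood values under the prior, hence of the resulting Monte Carlo estimate, grows with τ.

set.seed(42)
x <- 2                                   # a single observation from N(theta, 1)
for (tau in c(1, 10, 100)) {
  theta <- rnorm(1e5, 0, tau)            # simulations from the N(0, tau^2) prior
  w <- dnorm(x, theta, 1)                # likelihood values at the prior draws
  cat("tau =", tau,
      " estimate =", signif(mean(w), 3),
      " exact =", signif(dnorm(x, 0, sqrt(1 + tau^2)), 3),
      " relative sd of w =", round(sd(w) / mean(w), 1), "\n")
}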


Can we have our Bayesian cake and eat it too?

Posted in Books, pictures, Statistics, University life on January 17, 2018 by xi'an

This paper aims at solving the Bartlett-Lindley-Jeffreys paradox, i.e., the difficulty connected with improper priors in Bayes factors. The introduction is rather lengthy since by page 9 we are still (dis-)covering the Lindley paradox, along with the introduction of a special notation for -2 times the logarithm of the Bayes factor.

“We will now resolve Lindley’s paradox in both of the above examples.”

The “resolution” of the paradox lies in stating the well-known consistency of the Bayes factor, i.e., that as the sample grows to infinity it goes to infinity (almost surely) under the null hypothesis and to zero under the alternative (almost surely again, both statements holding for fixed parameters). Hence the discrepancy between a small p-value and a Bayes factor favouring the null occurs with “vanishingly small” probability. (The authors distinguish between Bartlett’s paradox, associated with a prior variance going to infinity [or a prior becoming improper], and the Lindley-Jeffreys paradox, associated with a sample size going to infinity.)

“We construct cake priors using the following ingredients”

The “cake” priors are defined as pseudo-normal distributions, pseudo in the sense that they look like multivariate Normal densities, except for the covariance matrix also depending on the parameter, as e.g. in the Fisher information matrix. This reminds me of a recent paper by Ronald Gallant in the Journal of Financial Econometrics that I discussed, with the same feature, except for a scale factor inversely log-proportional to the dimension of the model. Now, what I find most surprising, besides the lack of parameterisation invariance, is that these priors are not normalised: they do not integrate to one. As to whether or not they integrate at all, the paper keeps silent. This is also a criticism I addressed to Gallant’s paper, getting no satisfactory answer. This is a fundamental shortcoming of the proposed cake priors…

“Hence, the relative rates that g⁰ and g¹ diverge must be considered”

The authors further argue (p.12) that by pushing the scale factors to infinity one produces the answer the Jeffreys prior would have produced. This is not correct since the way the scale factors diverge, relative to one another, drives the numerical value of the limit! Using scale factors inversely log-proportional to the dimension(s) of the model(s) is a correct solution from a mathematical perspective. But only from a mathematical perspective.

“…comparing the LRT and Bayesian tests…”

Since minus twice the log-Bayes factor is the log-likelihood ratio statistic modulo the ν log(n) BIC correction, it is not very surprising that both approaches reach similar answers when the scale goes to infinity along with the sample size n. In the end, there seems to be no reason for going down that path other than making the likelihood ratio and the Bayes factor asymptotically coincide, which does not sound like a useful goal to me. (Nor does recovering BIC in the linear model.)
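
For context, the standard Schwarz approximation behind this remark is, with ν_i the dimension and θ̂_i the maximum likelihood estimate in model M_i,

\[
-2\log B_{01}(x)\;\approx\;-2\log\frac{f_0(x\mid\hat\theta_0)}{f_1(x\mid\hat\theta_1)}\;+\;(\nu_0-\nu_1)\log n\,,
\]

i.e. the difference of the two BIC values.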

“No papers in the model selection literature, to our knowledge, chose different constants for each model under consideration.”

In conclusion, the paper sets up a principled or universal way to choose “cake” priors fighting the Lindley-Jeffreys paradox, but the choices made therein remain arbitrary. They allow for a particular limit to be found when the scale parameter(s) get to infinity, but the limit depends on the connection created between the models, which should not share parameters if one is to be chosen. (The discussion of using improper priors and arbitrary constants is aborted, resorting to custom arguments such as the one above.) The paper thus unfortunately does not resolve the Lindley-Jeffreys paradox and the vexing issue of improper priors unfit for testing.

abandon all o(p) ye who enter here

Posted in Books, Statistics, University life on September 28, 2017 by xi'an

Today appeared on arXiv a joint paper by Blakeley McShane, David Gal, Andrew Gelman, Jennifer Tackett, and myself, towards the abandonment of significance tests, which is a response to the 72-author paper in Nature Human Behaviour that recently made the news (and comments on the ‘Og). Some of these comments have been incorporated in the paper, along with others more on the psychology testing side. From the irrelevance of point null hypotheses to the numerous incentives for multiple comparisons, to the lack of sufficiency of the p-value itself, to the limited applicability of the uniformly most powerful Bayesian tests principle…

“…each [proposal] is a purely statistical measure that fails to take a more holistic view of the evidence that includes the consideration of the traditionally neglected factors, that is, prior and related evidence, plausibility of mechanism, study design and data quality, real world costs and benefits, novelty of finding, and other factors that vary by research domain.”

One may wonder about this list of grievances and its impact on statistical practice. The paper however suggests two alternatives, one being to investigate the potential impact of (neglected) factors rather than relying on thresholds. Another one, maybe less realistic, unless it is the very same, is to report the entirety of the data associated with the experiment. This makes the life of journal editors and grant evaluators harder, possibly much harder, but it indeed suggests a holistic and continuous approach to data analysis, rather than the masquerade of binary outputs. (Not surprisingly, posting this item of news on Andrew’s blog a few hours ago generated a large amount of discussion.)