## paradoxes in scientific inference

**T**his CRC Press book was sent to me for review in CHANCE: *Paradoxes in Scientific Inference* is written by Mark Chang, vice-president of AMAG Pharmaceuticals. The topic of scientific paradoxes is one of my primary interests and I have learned a lot by looking at the Lindley-Jeffreys and Savage-Dickey paradoxes. However, I did not find a renewed sense of excitement when reading the book. The very first (and maybe the best!) paradox with *Paradoxes in Scientific Inference* is that it is a book from the future! Indeed, its copyright year is 2013 (!), although I got it a few months ago. (Not to mention the cover mimicking Escher’s “paradoxical” pictures with dice, a sculpture due to Shigeo Fukuda that is apparently not credited in the book. As I do not want to get into another dice-cover polemic, I will abstain from further comments!)

**N**ow, getting into a deeper level of criticism (!), I find the book very uneven and overall quite disappointing. (It is even shaky in its statistical foundations.) Especially given my initial level of excitement about the topic!

**F**irst, there is a tendency to turn *everything* into a paradox: obviously, when writing a book about paradoxes, everything looks like a paradox! This means bringing into the picture every paradox known to man and then some, i.e., things that are either un-paradoxical (e.g., Gödel’s incompleteness result) or uninteresting in a scientific book (e.g., the birthday paradox, which may be surprising but is far from a paradox!). Fermat’s theorem is also quoted as a paradox, even though nothing in the text indicates in which sense it is one. (Or is it because it is simple to state, hard to prove?!) Similarly, Brownian motion is considered a paradox, as “*reconcil[ing] the paradox between two of the greatest theories of physics (…): thermodynamics and the kinetic theory of gases*” (p.51). The author further considers the bias of the MLE to be a paradox (p.117), while omitting the much more substantial “paradox” of the non-existence of unbiased estimators for most parameters, which simply means unbiasedness is an irrelevant criterion. Or the even more puzzling “paradox” that the secondary MLE, derived from the likelihood associated with the distribution of a primary MLE, may differ from the primary one. (My favourite!)
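As an aside for readers who have not met it, the bias of the variance MLE is easy to check numerically. A minimal sketch (assuming i.i.d. normal samples; the sample size and number of replications are arbitrary choices):

```python
import numpy as np

# The MLE of sigma^2 for an i.i.d. normal sample divides by n, not n-1,
# so its expectation is (n-1)/n * sigma^2: biased, yet perfectly usable.
rng = np.random.default_rng(0)
n, sigma2, reps = 5, 1.0, 200_000
samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
mle = samples.var(axis=1, ddof=0)       # divide by n   -> biased
unbiased = samples.var(axis=1, ddof=1)  # divide by n-1 -> unbiased

print(mle.mean())       # close to (n-1)/n = 0.8
print(unbiased.mean())  # close to 1.0
```

The point the review makes still stands: the bias vanishes as n grows, and unbiasedness is not a property worth organising inference around.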

“*When the null hypothesis is rejected, the p-value is the probability of the type I error.*” *Paradoxes in Scientific Inference* (p.105)

“*The p-value is the conditional probability given H₀.*” *Paradoxes in Scientific Inference* (p.106)

**S**econd, the depth of the statistical analysis in the book is often wanting. For instance, Simpson’s paradox is not analysed from a statistical perspective, only reported as a fact. Sticking to statistics, take the discussion of Lindley’s paradox. The author seems to think that the problem lies with the different conclusions produced by the frequentist, likelihood, and Bayesian analyses (p.122). This is completely wrong: Lindley’s (or Lindley-Jeffreys’s) paradox is about the lack of significance of Bayes factors based on improper priors. Similarly, when the likelihood ratio test is introduced, the reference threshold is given as equal to 1, and no mention is later made of compensating for different degrees of freedom or guarding against over-fitting. The discussion about *p*-values is equally garbled, witness the above quote, which (a) conditions upon the rejection and (b) ignores the dependence of the *p*-value on a realized random variable.
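For readers who have not met Lindley’s paradox, the clash is easy to reproduce numerically: hold the test statistic fixed at a “significant” value while the sample size grows, and the Bayes factor under a diffuse prior swings towards the null. A minimal sketch (the N(0, τ²) prior on μ, the value z = 2.5, and the sample sizes are my own illustrative choices, not the book’s):

```python
from math import sqrt, pi, exp, erf

def phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def norm_pdf(x, var):
    """Density of N(0, var) at x."""
    return exp(-x * x / (2.0 * var)) / sqrt(2.0 * pi * var)

z, sigma2, tau2 = 2.5, 1.0, 1.0     # fixed z statistic, unit variances
p_value = 2.0 * (1.0 - phi(z))      # ~0.0124: 'significant' at 5% for every n
for n in (10, 1_000, 100_000):
    xbar = z / sqrt(n)              # the sample mean giving that z statistic
    # Bayes factor B01 for H0: mu = 0 against H1: mu ~ N(0, tau2),
    # i.e. ratio of marginal densities of xbar under the two hypotheses.
    b01 = norm_pdf(xbar, sigma2 / n) / norm_pdf(xbar, sigma2 / n + tau2)
    print(n, b01)                   # grows roughly like sqrt(n)
```

The same fixed p-value of about 0.012 thus coexists with a Bayes factor that increasingly supports the null as n grows, which is the actual content of the paradox.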

“*The peaks of the likelihood function indicate (on average) something other than the distribution associated with the drawn sample. As such, how can we say the likelihood is evidence supporting the distribution?*” *Paradoxes in Scientific Inference* (p.119)

**T**he chapter on statistical controversies actually focuses on the opposition between frequentist, likelihood, and Bayesian paradigms. The author seems to have studied Mayo and Spanos’ *Error and Inference* at great length. (As I did, as I did!) He spends around twenty pages of Chapter 3 on this opposition and on the conditionality, sufficiency, and likelihood principles that were united by Birnbaum and recently deconstructed by Mayo. In my opinion, Chang makes a mess of describing the issues at stake in this debate and leaves the reader more bemused at the end than at the beginning of the chapter. For instance, the conditionality principle is confused with the *p*-value being computed conditional on the null (hypothesis) model (p.110), or with the selected experiment being unknown (p.110). The likelihood function is considered as a sufficient statistic (p.137). The “paradox” of an absence of non-trivial sufficient statistics in all models but exponential families (the Pitman-Koopman lemma) is not mentioned. The fact that ancillary statistics bring information about the precision of a sufficient statistic is presented as a paradox (p.112). Having the *same* physical parameter θ is confused with having the *same* probability distribution indexed by θ, which is definitely not the *same* thing (p.115)! The likelihood principle is confused with the likelihood ratio test (p.117) and with maximum likelihood estimation (witness the above quote). The dismissal of Mayo’s rejection of Birnbaum’s proof (a rejection I fail to understand) is not any clearer: “*her statement about the sufficient statistic under a mixed distribution (a fixed distribution) is irrelevant*” (p.138). This actually made me think of another interpretation of Mayo’s argument that could prove her right! More on that in another post.

“*From a single observation x from a normal distribution with unknown mean μ and standard deviation σ it is possible to create a confidence interval on μ with finite length.*” *Paradoxes in Scientific Inference* (p.103)

**O**ne of the first paradoxes in the statistics chapter is the one endorsed by the above quote. I found it intriguing that this interval could be of the form x ± η|x| with η depending only on the confidence coverage… Then I checked and saw that the confidence coverage was defined by default, i.e., the actual coverage is at least the nominal coverage, which is much less exciting (and much less paradoxical).
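The “by default” coverage is easy to check: for X ~ N(μ, σ²) and c > 1, the coverage of x ± c|x| depends only on μ/σ and stays bounded away from zero. A minimal sketch (taking σ = 1 without loss of generality, and c = 5 as an arbitrary illustrative choice):

```python
from math import sqrt, erf

def phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

c = 5.0  # interval x +/- c|x|, with c > 1
# For mu >= 0 and sigma = 1, the event |X - mu| <= c|X| is
# {X >= mu/(c+1)} union {X <= -mu/(c-1)}, giving a closed-form coverage.
def coverage(mu):
    return phi(mu * c / (c + 1.0)) + phi(-mu * c / (c - 1.0))

cov = [coverage(0.05 * k) for k in range(200)]  # grid of mu in [0, 10)
print(min(cov))  # about 0.903: coverage never drops below 90% for c = 5
```

The coverage equals 1 at μ = 0 and tends to 1 as μ/σ grows, dipping in between; the guaranteed level is the minimum over μ/σ, hence “at least the nominal coverage” rather than exactly it.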

“*One of the proudest accomplishments of my childhood was creating an electric bell, though later I found it was just a reinvention. Other reinventions I remember are discovering some of the interesting properties of the number 9 and the solution for a general quadratic equation.*” *Paradoxes in Scientific Inference* (p.24)

**T**he book abounds in quotes like the above, where the author does not shy away from promoting himself. For instance, on page 2, he adds his own quotes to a list of aphorisms from major figures like Montaigne, Lao-Tzu, or Picasso. Take also the gem “*I will feel so rewarded if this book can help a young reader in some way to become a thinker*” (p.viii). The author further claims several times to bring a unification of the frequentist and Bayesian perspectives, even though I fail to see how he does it. E.g., “*whether frequentist or Bayesian, concepts of probability are based on the collection of similar phenomena or experiments*” (p.63) does not bring a particularly clear answer. Similarly, the murky discussion of the Monty Hall dilemma does not characterise the distinction between frequentist and Bayesian reasoning (if anything, this is a frequentist setting). A last illustration is the ‘paradox of posterior distributions’ (p.124), where Chang gets it plain wrong in claiming that the sequential update of a prior distribution is *not* equal to the final posterior (see, e.g., Section 1.4 in *The Bayesian Choice*). A nice quote is recycled from my book, though (a completely irrelevant anecdote is that George Casella actually hated this quote!):

“*If you believe anything happens (…) for a reason, then samples may never be independent, else there would be no randomness. Just as T. Hilberman [sic] put it (Robert 1994): ‘From where we stand, the rain seems random. If we could stand somewhere else, we would see the order in it.’*” *Paradoxes in Scientific Inference* (p.140)
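On the sequential-update point above, the standard result is easy to verify in a conjugate setting: updating the prior one observation at a time lands on exactly the same posterior as processing the whole sample at once. A minimal sketch (normal mean with known variance; the prior and data values are arbitrary illustrative choices):

```python
# Normal mean with known variance sigma2, conjugate N(m, v) prior.
def update(m, v, y, sigma2):
    """Posterior (mean, variance) after observing y ~ N(mu, sigma2)."""
    v_post = 1.0 / (1.0 / v + 1.0 / sigma2)
    m_post = v_post * (m / v + y / sigma2)
    return m_post, v_post

sigma2 = 2.0
m, v = 0.0, 10.0                 # arbitrary prior mean and variance
data = [1.2, -0.7, 3.1, 0.4]

# Sequential: feed the observations one at a time.
ms, vs = m, v
for y in data:
    ms, vs = update(ms, vs, y, sigma2)

# Batch: n observations are equivalent to a single observation of the
# sample mean, whose variance is sigma2 / n.
n = len(data)
mb, vb = update(m, v, sum(data) / n, sigma2 / n)

print(ms, vs)  # identical to (mb, vb) up to floating-point rounding
```

Posterior precisions add and precision-weighted means add, so the order (and grouping) of the updates cannot matter, which is the point made against the p.124 “paradox”.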

**M**ost surprisingly, the book contains exercises in every chapter, whose purpose is lost on me. What is the point in asking students to “Write an essay on the role of the Barber’s Paradox in developing modern set theory” or “How does the story of Achilles and the tortoise address the issue of the sum of an infinite number of arbitrarily small numbers”?! Not to mention the top one: “*Can you think of any applications from what you have learned from this chapter?*” Erm… frankly, no!

December 22, 2012 at 2:45 am

Professor Christian Robert reviewed my book, *Paradoxes in Scientific Inference*. I found that the majority of his criticisms had no foundation and were based on a truncated way of reading. I give point-by-point responses below. For clarity, I have kept his original comments.

December 22, 2012 at 7:35 am

Because the response is hundreds of lines long, I have turned it into a full post, to appear soon.