Archive for Evidence and Evolution

how to translate evidence into French?

Posted in Books, Statistics, University life on July 6, 2014 by xi'an

A few minutes ago, I got this email from Gauvain, who is writing a PhD in philosophy of science:

The author of the text I have to translate refers to Bayes factors as a “Bayesian measure of evidence”, and to p-value tests as a “frequentist measure of evidence”. I was wondering whether there exists a recognised and established French translation for this expression, “measure of evidence”. I have sometimes come across “mesure d’évidence”, which sounds very much like an anglicism, and sometimes “estimateur de preuve”, but the latter seems to me liable to create confusion with other uses of the term “estimateur”.

which (pardon my French!) wonders how to translate the term evidence into French. It would sound natural for the French évidence to be the answer, but this is not the case. Despite sharing the same Latin root (evidentia), and despite the English word coming from medieval French, the two words have different meanings: in English, it means a collection of facts coming in support of an assumption or a theory, while in French it means something obvious, whose truth is immediately perceived. Surprisingly, English kept the adjective evident with the same [obvious] meaning as the French évident, but the noun moved towards a much less definitive meaning, both in Law and in Science. I had never thought about the huge gap between the two meanings, although I must have been surprised by the usage the first time I heard it in English. I no longer give it a thought, as when I reviewed Sober’s Evidence and Evolution.

One may still wonder about the best possible translation of evidence into French, even though marginal likelihood (vraisemblance marginale) is just fine for statistical purposes. I would suggest faisceau de présomptions, degré de soutien, or intensité de soupçon as (lengthy) solutions. Soupçon could work on its own, but has a fairly negative ring…
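(As a reminder of why vraisemblance marginale is fine for statistical purposes, and in my notation rather than that of the text to be translated: the evidence attached to a model with likelihood f(x|θ) and prior π is the marginal likelihood

\[ m(x) = \int_\Theta f(x\mid\theta)\,\pi(\theta)\,\mathrm d\theta, \]

and the Bayes factor mentioned in Gauvain’s email is the ratio of two such evidences, \(B_{01}(x)=m_0(x)/m_1(x)\).)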

Philosophy of Science, a very short introduction (and review)

Posted in Books, Kids, Statistics, Travel on November 3, 2013 by xi'an

When visiting the bookstore on the campus of the University of Warwick two weeks ago, I spotted this book, Philosophy of Science, a very short introduction, by Samir Okasha, and the “bargain” offer of getting two books for £10 enticed me to buy it along with a Friedrich Nietzsche, a very short introduction… (Maybe with the irrational hope that my daughter would take a look at those for her philosophy course this year!)

“Popper’s attempt to show that science can get by without induction does not succeed.” (p.23)

Since this is [unsurprisingly!] a very short introduction, I did not get much added value from the book. Nonetheless, it was an easy read for short trips in the metro and short waits here and there. And it would be a good [very short] introduction for anyone newly interested in the philosophy of sciences. The first chapter tries to define what science is, with reference to the authority of Popper (and a mere mention of Wittgenstein), and concludes that there is no clear-cut demarcation between science and pseudo-science. (Mathematics apparently does not constitute a science: “Physics is the most fundamental science of all”, p.55.) I would have liked to see the quote from Friedrich Nietzsche

“It is perhaps just dawning on five or six minds that physics, too, is only an interpretation and exegesis of the world (to suit us, if I may say so!) and not a world-explanation.”

in Beyond Good and Evil, as it illustrates the main point of the chapter, and maybe of the book, that scientific theories can never be proven true. Plus, it is often misinterpreted as an anti-science statement by Nietzsche. (And it links both books I bought!) Continue reading

Error and Inference [#3]

Posted in Books, Statistics, University life on September 14, 2011 by xi'an

(This is the third post on Error and Inference, yet again being a raw and naïve reaction to a linear reading rather than a deeper and more informed criticism.)

“Statistical knowledge is independent of high-level theories.”—A. Spanos, p.242, Error and Inference, 2010

The sixth chapter of Error and Inference is written by Aris Spanos and deals with the issues of testing in econometrics. On the one hand, it provides a fairly interesting entry into the history of economics and the resistance to data-backed theories, primarily because the buffers between data and theory are multifold (“huge gap between economic theories and the available observational data“, p.203). On the other hand, what I fail to understand in the chapter is the meaning of theory, as it seems very distinct from what I would call a (statistical) model. The sentence “statistical knowledge, stemming from a statistically adequate model allows data to `have a voice of its own’ (…) separate from the theory in question and it succeeds in securing the frequentist goal of objectivity in theory testing” (p.206) is puzzling in this respect. (Actually, I would have liked to see a clear meaning put to this “voice of its own”, as it otherwise mostly sounds like a catchy sentence…) Similarly, Spanos distinguishes between three types of models: primary/theoretical; experimental/structural, where “the structural model contains a theory’s substantive subject matter information in light of the available data” (p.213); and data/statistical, where “the statistical model is built exclusively using the information contained in the data” (p.213). I have trouble understanding how testing can distinguish between those types of models: as a naïve reader, I would have thought that only the statistical model could be tested by a statistical procedure, even though I would not call the above a proper definition of a statistical model (esp. since Spanos writes a few lines below that the statistical model “would embed (nest) the structural model in its context”, p.213). The normal example followed on pages 213-217 does not help [me] make sense of this distinction: it simply illustrates the impact of failing some of the defining assumptions (normality, time homogeneity [in mean and variance], independence), as sketched below. (As an aside, the discussion about the poor estimation of the correlation, pp.214-215, does not help, because it involves a second variable Y that is not defined in this example.) It would of course be nice if the “noise” in a statistical/econometric model could be studied in complete separation from the structure of this model, but the two seem too irremediably intermingled for this partition of roles to hold. I thus do not see how the “statistically adequate model is independent from the substantive information” (p.217), i.e. by which rigorous process one can isolate the “chance” parts of the data to build and validate a statistical model per se. The simultaneous equation model (SEM, pp.230-231) is more illuminating about the distinction drawn by Spanos between structural and statistical models/parameters, even though the difference in this case boils down to a question of identifiability. Continue reading
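For the record, here is my own sketch of what the normal example boils down to (notation mine, not the chapter’s): the statistical model is

\[ X_t \sim \mathcal N(\mu, \sigma^2), \qquad t=1,\dots,n, \quad \text{independently}, \]

whose defining assumptions are precisely normality, time homogeneity of the mean μ and variance σ² (neither depends on t), and independence across observations, and the illustration of pp.213-217 shows the impact of failing these assumptions.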

Error and Inference [#2]

Posted in Books, Statistics, University life on September 8, 2011 by xi'an

(This is the second post on Error and Inference, again being a raw and naive reaction to a linear reading rather than a deeper and more informed criticism.)

“Allan Franklin once gave a seminar under the title `Ad Hoc is not a four letter word.'”—J. Worrall, p.130, Error and Inference, 2010

The fourth chapter of Error and Inference, written by John Worrall, covers the highly interesting issue of “using the data twice”. The point has been debated several times on Andrew’s blog and it is one of the main criticisms raised against Aitkin’s posterior/integrated likelihood. Worrall’s perspective is both related and unrelated to this purely statistical issue, in that he considers that “you can’t use the same fact twice, once in the construction of a theory and then again in its support” (p.129). (He even signed a “UN Charter”, where UN stands for “use novelty”!) After reading both Worrall’s and Mayo’s viewpoints, the latter being that all that matters is severe testing as it encompasses the UN perspective (if I understood correctly), I am afraid I am none the wiser, but this led me to reflect on the statistical issue. Continue reading
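To fix ideas on the statistical side of the debate, here is a hedged sketch (my notation, not Worrall’s) of why Aitkin’s posterior/integrated likelihood attracts the “using the data twice” criticism: it replaces the marginal likelihood by the posterior expectation of the likelihood,

\[ \bar L(x) = \int f(x\mid\theta)\,\pi(\theta\mid x)\,\mathrm d\theta \qquad\text{versus}\qquad m(x) = \int f(x\mid\theta)\,\pi(\theta)\,\mathrm d\theta, \]

so that the data x enters the first expression twice, once in the likelihood f(x|θ) and once in the posterior π(θ|x) used to integrate it.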

Error and Inference [#1]

Posted in Books, Statistics, University life on September 1, 2011 by xi'an

“The philosophy of science offers valuable tools for understanding and advancing solutions to the problems of evidence and inference in practice”—D. Mayo & A. Spanos, p.xiv, Error and Inference, 2010

Deborah Mayo kindly sent me her latest book, whose subtitle is “Recent exchanges on experimental reasoning, reliability, and the objectivity and rationality of Science” and whose contributors are P. Achinstein, A. Chalmers, D. Cox, C. Glymour, L. Laudan, A. Musgrave, and J. Worrall, plus both editors, Deborah Mayo and Aris Spanos. Deborah Mayo rightly inferred that this debate was bound to appeal to my worries about the nature of testing and model choice and to my layman interest in the philosophy of Science. Speaking of which [layman], the book reads really well, even though I am missing references, and even though it cannot be read under my cherry tree (esp. now that the weather has moved from été to étaumne… as I heard this morning on the national public radio). Deborah Mayo is clearly the driving force in putting this volume together, from setting up the ERROR 06 conference to commenting on the chapters of all contributors (but her own and Aris Spanos’). Her strongly frequentist perspective on the issues of testing and model choice is thus reflected in the overall tone of the volume, even though contributors bring some contradiction to the debate. (Disclaimer: I found the comics below on Zoltan Dienes’s webpage. I however have no information nor opinion [yet] about the contents of the corresponding book.)

“However, scientists wish to resist relativistic, fuzzy, or post-modern turns (…) Notably, the Popperian requirement that our theories are testable and falsifiable is widely regarded to contain important insights about responsible science and objectivity.”—D. Mayo & A. Spanos, p.2, Error and Inference, 2010

Given the philosophical, complex, and interesting nature of the work, I will split my comments into several linear posts (hence the #1), as I did for Evidence and Evolution. The following comments are thus about a linear (even pedestrian) and incomplete read through the first three chapters. These comments do not pretend to any depth, but simply reflect the handwritten notes and counterarguments I scribbled as I was reading through… A complete book review was published in the Notre-Dame Philosophical Reviews. (Though, can you trust a review that considers Sartre a major philosopher?! At least, he appears as a counterpart to Bertrand Russell in the frontispiece of the review.) As illustrated by the above quote (whose first part I obviously endorse), the overall perspective in the book is Popperian, despite Popper’s criticism of statistical inference as a whole and of Bayesian statistics in particular (although Andrew would disagree). Another fundamental concept throughout the book is the “Error-Statistical philosophy”, of which Deborah Mayo is the proponent. One of the tenets of this philosophy is a reliance on statistical significance tests in the Fisher-Neyman-Pearson (or frequentist) tradition, along with a severity principle (“We want hypotheses that will allow for stringent testing so that if they pass we have evidence of a genuine experimental effect“, p.19) stated as (p.22)

A hypothesis H passes a severe test T with data x if

  1. x agrees with H, and
  2. with very high probability, test T would have produced a result that accords less well with H than does x, if H were false or incorrect.

(The p-value is advanced as a direct accomplishment of this goal, but I fail to see why it does or why a Bayes factor would not. Indeed, the criterion depends on the definition of probability when H is false or incorrect. This relates to Mayo’s criticism of the Bayesian approach, as explained below.)
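(A hedged formalisation, in my own notation rather than Mayo’s: writing d(x;H) for a measure of discordance between the data and H, the two requirements amount to

\[ d(x;H) \le c \quad\text{(agreement)} \qquad\text{and}\qquad \Pr\big(d(X;H) > d(x;H)\,;\,H\ \text{false}\big) \ge 1-\epsilon \quad\text{(severity)}, \]

and my difficulty is precisely with the distribution to be used for X under “H false”, i.e., under the catchall alternative discussed next.)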

“Formal error-statistical tests provide tools to ensure that errors will be correctly detected with high probabilities“—D. Mayo, p.33, Error and Inference, 2010

In Chapter 1, Deborah Mayo has a direct go at the Bayesian approach. The main criticism of the Bayesian approach to testing (defined through the posterior probability of the hypothesis, rather than through the predictive) is about the catchall hypothesis, a somewhat desultory term replacing the alternative hypothesis. According to Deborah Mayo, this alternative should “include all possible rivals, including those not even thought of” (p.37). This sounds like a weak argument, although it was also used by Alan Templeton in his rebuttal of ABC, given that (a) it should also apply in the frequentist sense, in order to define the probability distribution “when H is false or incorrect” (see, e.g., “probability of so good an agreement (between H and x) calculated under the assumption that H is false”, p.40); (b) a well-defined alternative should be available as testing an hypothesis is very rarely the end of the story: if H is rejected, there should/will be a contingency plan; (c) rejecting or accepting an hypothesis H in terms of the sole null hypothesis H does not make sense from operational as well as from game-theoretic perspectives. The further argument that the posterior probability of H is a direct function of the prior probability of H does not stand against the Bayes factor. (The same applies to the criticism that the Bayesian approach does not accommodate newcomers, i.e., new alternatives.) Stating that “one cannot vouch for the reliability of [this Bayesian] procedure—that it would rarely affirm theory T were T false” (p.37) completely ignores the wealth of results about the consistency of the Bayes factor (since the “asymptotic long run”, p.20, matters in the Error-Statistical philosophy). The final argument that Bayesians rank “theories that fit the data equally well (i.e., have identical likelihoods)” (p.38) does not account for (or dismisses, p.50, referring to Jeffreys and Berger instead of Jefferys and Berger) the fact that Bayes factors are automated Occam’s razors, in that averaging the likelihoods over parameter spaces of different dimensions naturally favours simpler models. Even though I plan to discuss this point in a second post, Deborah Mayo also seems to imply that Bayesians are using the data twice (this is how I interpret the insistence on p.50), which is a sin [genuine] Bayesian analysis can hardly be found guilty of!
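To make the Occam’s razor point slightly more concrete, here is a rough and standard sketch (not Mayo’s, and under regularity assumptions I will not spell out): a Laplace expansion of each marginal likelihood gives

\[ \log B_{01}(x) = \log\frac{m_0(x)}{m_1(x)} \approx \log\frac{\sup_{\theta_0} f_0(x\mid\theta_0)}{\sup_{\theta_1} f_1(x\mid\theta_1)} + \frac{d_1-d_0}{2}\,\log n + O(1), \]

so that, when the larger model (of dimension d_1 > d_0) does not fit markedly better, the averaging over its extra dimensions automatically penalises it, without any explicit complexity penalty being imposed.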

As pointed out by Adam La Caze in Notre-Dame Philosophical Reviews:

An exchange on Bayesian philosophy of science or Bayesian statistics would have been a welcome addition and would have benefited the dual goals of the volume. Bayesian philosophy of science and Bayesian statistics are a constant foil to Mayo’s work, but neither approach is given much of a voice. An exchange on Bayesian philosophy of science is made all the more relevant by the strength of Mayo’s challenge to a Bayesian account of theory appraisal. A virtue of the error-statistical account is its ability to capture the kind of detailed arguments that scientists make about data and the methods they employ to arrive at reliable inferences. Mayo clearly thinks that Bayesians are unable to supplement their view with any sort of prospective account of such methods. This seems contrary to practice where scientists make similar methodological arguments whether they utilise frequentist or Bayesian approaches to statistical inference. Indeed, Bayesian approaches to study design and statistical inference play a significant (and increasing) role in many sciences, often alongside frequentist approaches (clinical drug development provides a prominent example). It would have been interesting to see what, if any, common ground could be reached on these approaches to the philosophy of science (even if very little common ground seems possible in terms of their competing approach to statistical inference).

Review in Human Genomics

Posted in Books, Statistics on February 12, 2011 by xi'an

My review of Sober’s Evidence and Evolution: The Logic Behind the Science, first polished on the ‘Og, just got published in Human Genomics (vol. 5, number 2, pp. 130-136). This is my very first publication in this journal and I am very glad (and grateful to the book editor) to have had the opportunity to keep my review to its original seven pages in the journal. (Here is a copy on my webpage in case access to the journal is impossible.)

Seminar of philosophy [ex-post]

Posted in Books, pictures, Statistics on December 1, 2010 by xi'an

Yesterday, I gave my talk at the Seminar of philosophy of mathematics at Université Paris Diderot, in this new district of Paris where I always get lost because construction work continuously modifies the topology of the place. (This year, I ended up biking the mythical Rue Watt, which has been beautifully renovated.) I managed nonetheless to get there in time and talked about Bayesian model choice and the difficulties with Murray Aitkin’s proposal. The talk was presumably much too mathematical and not philosophical enough, but it was followed by a discussion launched by the two following speakers, Jan Sprenger and Bengt Autzen. Due to teaching duties, I could only attend the talk by Jan Sprenger, who covered the philosophical aspects of the difficulty in defining objective Bayesian inference, alas missing both Bengt’s and Steve Fienberg’s talks… He mostly focussed on MaxEnt priors, with an interesting counterexample by Teddy Seidenfeld, but also mentioned reference priors as suffering from the same difficulties. From my (non-philosophical) perspective, I consider that MaxEnt priors fall short in terms of objectivity, because they first require the definition of a reference measure for the (entropy) divergence to be defined. During the talk, Jan also mentioned the book In Defence of Objective Bayesianism by Jon Williamson, which I will try to read (and comment on) in the coming months. I just had a few words with Bengt, who told me he had worked on Sober’s Evidence and Evolution as part of his PhD thesis, so I wished we had had more time to chat about that! (Steve has proposed to give his talk at the students’ seminar here in CREST so that we can discuss effects of causes versus causes of effects.)
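To spell out the reference-measure point in my own notation: for a continuous parameter θ, the “entropy” being maximised is only defined relative to a dominating measure μ,

\[ \mathcal H_\mu(\pi) = -\int \pi(\theta)\,\log\frac{\pi(\theta)}{\mu(\theta)}\,\mathrm d\theta, \]

i.e., the negative Kullback-Leibler divergence between π and μ, so that the resulting MaxEnt prior inevitably depends on the choice of μ, which is where its claim to objectivity breaks down for me.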