beyond subjective and objective in Statistics

“At the level of discourse, we would like to move beyond a subjective vs. objective shouting match.” (p.30)

This paper by Andrew Gelman and Christian Hennig calls for the abandonment of the terms objective and subjective in (not solely Bayesian) statistics, arguing that there is more than mere prior information and data to the construction of a statistical analysis. The paper opens with the authors’ proposal, followed by four application examples, then a survey of philosophy-of-science perspectives on objectivity and subjectivity in statistics and other sciences, next a study of the subjective and objective aspects of the mainstream statistical streams, and concludes with a discussion of how to implement the proposed move.

“…scientists and the general public celebrate the brilliance and inspiration of greats such as Einstein, Darwin, and the like, recognizing the roles of their personalities and individual experiences in shaping their theories and discoveries” (p.2)

I do not see the relevance of this argument, in that the myriad factors leading, say, Marie Curie or Rosalind Franklin to their discoveries are more than subjective, being eminently personal and the result of unique circumstances, while the corresponding theories remain within a common and therefore objective corpus of scientific theories. Hence I would not equate the derivation of statistical estimators, and even less the computation of statistical estimates, with the extension or negation of existing scientific theories by scientists.

“We acknowledge that the “real world” is only accessible to human beings through observation, and that scientific observation and measurement cannot be independent of human preconceptions and theories.” (p.4)

The above quote reminds me very much of Poincaré’s

“It is often said that experiments should be made without preconceived ideas. That is impossible. Not only would it make every experiment fruitless, but even if we wished to do so, it could not be done. Every man has his own conception of the world, and this he cannot so easily lay aside.” Henri Poincaré, La Science et l’Hypothèse

The central proposal of the paper is to replace ‘objective’ and ‘subjective’ with less value-loaded and more descriptive terms. Given that very few categories of statisticians take pride in their subjectivity, apart from a majority of Bayesians, but rather use the term as a derogatory label for other categories, I fear the proposal stands little chance of resolving this situation, even though I agree we should move beyond a distinction that does not reflect the complexity and richness of statistical practice. As the discussion in Section 2 makes clear, all procedures involve subjective choices and calibration (or tuning), either plainly acknowledged or swept under the carpet. Which is why I would add (at least) two points to the virtues of subjectivity:

  1. Spelling out unverifiable assumptions about the data production;
  2. Awareness of calibration of tuning parameters.

while I do not see consensus as necessarily a virtue. The examples in Section 3 are all worth considering, as they bring more detail, albeit in specific contexts, to the authors’ arguments. Most of them give the impression that the major issue lies with the statistical model itself, which may be both the most acute entry point of subjectivity in statistical analyses and the least discussed one. Including in the current paper, where e.g. Section 3.4 wants us to believe that running a classical significance test is objective and apt to detect an unfit model. And the hasty dismissal of machine learning in Section 6 is disappointing, because one thing machine learning does well is to avoid leaning too much on the model, relying on predictive performance instead. Furthermore, apart from Section 5.3, I see little in the paper about the trial-and-error way of building a statistical model and/or analysis, while subjective inputs from the operator are found at all stages of this construction and should be spelled out rather than ignored (and rejected).
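To make concrete how many choices lurk inside such a “classical” significance check of a model, here is a minimal parametric-bootstrap test for clustering. Everything in it (the exact 1-d 2-means statistic, the Gaussian null, the number of bootstrap replicates) is my own illustrative choice for this sketch, not the authors’ actual procedure from Section 3.4 or Hennig and Lin (2015):

```python
import numpy as np

rng = np.random.default_rng(0)

def two_means_wss(x):
    """Relative within-group sum of squares of the best split of sorted
    1-d data into two contiguous groups (exact 1-d 2-means).
    Small values indicate strong clustering."""
    x = np.sort(x)
    total = ((x - x.mean()) ** 2).sum()
    best = np.inf
    for k in range(1, len(x)):
        left, right = x[:k], x[k:]
        wss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        best = min(best, wss)
    return best / total

def bootstrap_p_value(x, n_boot=500):
    """Parametric bootstrap: simulate the statistic under a Gaussian
    'no clustering' null fitted to the data."""
    stat = two_means_wss(x)
    mu, sigma = x.mean(), x.std(ddof=1)
    sims = [two_means_wss(rng.normal(mu, sigma, x.size)) for _ in range(n_boot)]
    return (1 + sum(s <= stat for s in sims)) / (n_boot + 1)

# A clearly bimodal sample, for which the test should reject the null.
x = np.concatenate([rng.normal(-3, 1, 50), rng.normal(3, 1, 50)])
print(bootstrap_p_value(x))  # small p-value: evidence for clustering
```

Every line above embodies a decision (which statistic, which null family, how many replicates, which fitted parameters), which is precisely the kind of operator input I argue should be spelled out.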

“Yes, Bayesian analysis can be expressed in terms of subjective beliefs, but it can also be applied to other settings that have nothing to do with beliefs.” (p.31)

The survey in Section 4 of what the philosophy of science says about objectivity and subjectivity is quite thorough, as far as I can judge, but does not expand enough on the issue of “default” or all-inclusive statistical solutions, used through “point-and-shoot” software by innumerate practitioners in mostly inappropriate settings, with the impression of conducting “the” statistical analysis. This false feeling of “the” proper statistical analysis, and its relevance for this debate, also transpires through the treatment of statistical expertise by media and courts. I also think we could avoid mentioning the Heisenberg principle in this debate, as it does not really contribute anything useful. More globally, the exposition of a large range of notions of objectivity is, as is often the case in philosophy, not conclusive, and I feel nothing substantial comes out of it… It is also somehow antagonistic to the notion of a discussion paper, since every possible path has already been explored. Even forking ones. As a non-expert in philosophy, I would not feel confident in embarking upon a discussion on what realism is and is not.

“the subjectivist Bayesian point of view (…) can be defended for honestly acknowledging that prior information often does not come in ways that allow a unique formalization” (p.25)

When going through the examination of the objectivity of the major streams of statistical analysis, I get the feeling of exploring small worlds (in Lindley’s words) rather than the entire spectrum of statistical methodologies. For instance, frequentism seems to be reduced to asymptotics, completely missing the entire (lost?) continent of non-parametrics. (Which should not be considered “more” objective, but has the advantage of loosening the model specification.) Meanwhile, the error-statistical (frequentist) proposal of Mayo (1996) consumes a significant portion of the discussion [longer than the section on objectivist Bayesianism], relative to its quite limited diffusion within statistical circles. From a Bayesian perspective, the discussions of subjective, objective, and falsificationist Bayes do not really bring a fresh perspective to the debate between those three branches, apart from suggesting we should give up such value-loaded categorisations. As an O-Bayes card-carrying member, I find the characterisation of the objectivist branch somewhat restrictive, in focussing solely on Jaynes’ maxent solution, hence missing the corpus of work on creating priors with guaranteed frequentist or asymptotic properties. Like matching priors. I also find the defence of the falsificationist perspective, i.e. of Gelman and Shalizi (2013), both much less critical and quite extensive, in that, again, this is not what one could call a standard approach to statistics. Resulting in an implicit (?) message that this may be the best way to proceed.
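To flesh out that remark with standard results (my addition, not material from the paper): Jaynes’ maxent prior maximizes entropy under moment constraints, while a first-order probability matching prior, in the sense of Welch and Peers (1963), is instead defined through frequentist coverage of posterior quantiles.

```latex
% Maxent prior: maximize entropy subject to moment constraints
\pi_{\text{maxent}} = \arg\max_{\pi}\; -\int \pi(\theta)\log \pi(\theta)\,\mathrm{d}\theta
\quad \text{s.t.} \quad \int g_k(\theta)\,\pi(\theta)\,\mathrm{d}\theta = \mu_k,
\qquad \Rightarrow \qquad
\pi(\theta) \propto \exp\Big\{\textstyle\sum_k \lambda_k g_k(\theta)\Big\}.
% First-order matching prior: posterior quantiles with frequentist coverage
P_\theta\big(\theta \le \theta^{1-\alpha}(X)\big) = 1-\alpha + O(n^{-1}),
% for a scalar parameter, solved by Jeffreys' prior
\pi(\theta) \propto I(\theta)^{1/2}.
```

where $\theta^{1-\alpha}(X)$ denotes the posterior $(1-\alpha)$ quantile and $I(\theta)$ the Fisher information. The first construction encodes prior constraints, the second a frequentist guarantee, which is precisely the corpus of work I find missing from the paper’s account of the objectivist branch.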

In conclusion, on the positive side [for there is a positive side!], the paper exposes the need to spell out the various inputs (from the operator) leading to a statistical analysis, both for replicability or reproducibility, and for “objectivity” purposes, although only conscious choices and biases can be uncovered this way. It also reinforces the call for model awareness, by which I mean a critical stance on all modelling inputs, including priors!, and a disbelief that any model is true, applying Popper’s critical rationalism to statistical procedures. This has major consequences for Bayesian modelling in that, as advocated in Gelman and Shalizi (2013), as well as Evans (2015), sampling and prior models should be given the opportunity to be updated when they are inappropriate for the data at hand. On the negative side, I fear the proposal is far too idealistic in that most users (and some makers) of statistics cannot spell out their assumptions and choices, being unaware of them. This is in a way [admittedly, with gross exaggeration!] the central difficulty with statistics: that almost anyone anywhere can produce an estimate or a p-value without ever being proven wrong. It is therefore difficult to perceive how the epistemological argument therein [that objective versus subjective is a meaningless opposition] is going to profit statistical methodology, even assuming the list of Section 2.3 were made compulsory. Making the eight deadly sins listed in the final section vanish from publications would require expert reviewers (and by expert, I mean expert in statistical methodology), while it is almost never the case that journals outside our field call on statistics experts when refereeing a paper. Short of banning all statistics arguments from a journal, I am afraid there is no hope for a major improvement in that corner…

All in all, the authors deserve a big thank-you for making me reflect upon those issues, and I (especially) back their recommendation for reproducibility, meaning not only the disclosure of all conscious choices made in the construction process, but also the posting of (true or pseudo-) data and of relevant code for all publications involving a statistical analysis.

11 Responses to “beyond subjective and objective in Statistics”

  1. Absurd, Orwellian, destructive, deceptive, self-serving, misrepresentations and framing of nearly every scientific principle discussed and basic rational human reasoning itself — all the while exercising every form of research bias and logical fallacy known to man.

    There are a hundred ways to expose this kind of madness, so I just grabbed a propaganda model off Wikipedia that should suffice. Everything begins with properly aligned motivations for ‘scientific’ research, not your feelings to be accepted into ‘objective’ scientific inquiry.

    “We were motivated to write the present paper because we felt that our applied work, and that of others, was impeded because of the conventional framing of certain statistical analyses as subjective. It seemed to us that, rather than being in opposition, subjectivity and objectivity both had virtues that were relevant in making decisions about statistical analyses. We have earlier noted (Gelman and O’Rourke, 2015) that statisticians typically choose their procedures based on non-statistical criteria, and philosophical traditions and even the labels attached to particular concepts can affect real-world practice.”

    “The epistemic merit model is a method for understanding propaganda conceived by Sheryl Tuttle Ross…Ross developed the Epistemic merit model due to concern about narrow, misleading definitions of propaganda.

    To appropriately discuss propaganda, Ross argues that one must consider a threefold communication model: that of Sender-Message-Receiver. “That is… propaganda involve[s]… the one who is persuading (Sender) [who is] doing so intentionally, [the] target for such persuasion (Receiver) and [the] means of reaching that target (Message).” There are four conditions for a message to be considered propaganda. Propaganda involves the intention to persuade. As well, propaganda is sent on behalf of a sociopolitical institution, organization, or cause. Next, the recipient of propaganda is a socially significant group of people. Finally, propaganda is an epistemic struggle to challenge others’ thoughts.

    Ross claims that it is misleading to say that propaganda is simply false, or that it is conditional to a lie, since often the propagandist believes in what he/she is propagandizing. In other words, it is not necessarily a lie if the person who creates the propaganda is trying to persuade you of a view that they actually hold. “The aim of the propagandist is to create the semblance of credibility.” This means that they appeal to an epistemology that is weak or defective.

    False statements, bad arguments, immoral commands as well as inapt metaphors (and other literary tropes) are the sorts of things that are epistemically defective… Not only does epistemic defectiveness more accurately describe how propaganda endeavors to function… since many messages are in forms such as commands that do not admit to truth-values, [but it] also accounts for the role context plays in the workings of propaganda.

    Throughout history those who have wished to persuade have used art to get their message out. This can be accomplished by hiring artists for the express aim of propagandizing or by investing new meanings to a previously non-political work. Therefore, Ross states, it is important to consider “the conditions of its making [and] the conditions of its use.”

  2. […] other viewpoints, check out the blog posts by Xian and Nick Horton, as well as this interview on […]

  3. Christian Hennig Says:

    I’d be very curious actually, whether, when re-reading Sec. 3.4, you’d admit that what you wrote about it is a misrepresentation, or whether you believe indeed that we somehow implicitly say what you claimed we say, despite us not having written anything explicit of this kind?
    Chances are we still have a chance to make amendments before publication so if some wording of us suggests what you apparently believe you found in our text, we can change it… but honestly, my current subjective probability that you got it just wrong is quite high.

    • Dear Christian, I acknowledge that this point was overstated, given that you “explicitate” some choices related to your procedure, if not all, esp. when stating that “there is no point in arguing that our significance test is more objective than…” (p.13). I still find the argument that non-significance of the test statistic = no real evidence for clustering unconvincing.

      • Christian Hennig Says:

        Well, “no real evidence” only states the absence of something, and I’d be surprised if in the information given by us (including the original 2013 paper) you could find evidence in favour.

        The real issue is whether what we did was a good (or even “as good as possible”) attempt to find evidence if there was something to be found. I’m curious whether you’d have a better idea.

      • I was not trying to outsmart you, far from it!, but my point was to stress the reliance on a “standard” testing method that was itself the result of choices and decisions.

      • Christian Hennig Says:

        Sorry, actually in this example and the 2013 paper there was evidence for clustering; examples where indeed no evidence was found are in Hennig and Lin (2015).

  4. Christian Hennig Says:

    Thanks for your long discussion. You make some valuable points, and certainly you are right that there are many issues on which much more could be said (the paper is quite long already, though).
    Also it’s certainly true that subjective forces are at work when it comes to the issue of how much space we devote to which stream in statistics.

    However, I’d like to correct a few things. It is a misrepresentation to write that “Section 3.4 wants us to believe that running a classical significance test is objective”. Read Section 3.4 again and realise that we say clearly there that what is done should *not* be called “objective”, and neither is it really “classical”. The “philosophies” in Section 6, including “machine learning”, are not “dismissed”; they are just given less emphasis, focus, and space in the discussion (discussing all these, various substreams of O-Bayes etc. in detail would make a book, not a paper).
    Last (and least;-)) I’d appreciate if you could get my name right, Hennig not Henning.

  5. > I would not feel confident in embarking upon a discussion on what realism is and is not.

    That may not be as hard as you think as the authors primarily quote Chang “We are therefore “active scientific realists” in the sense of Chang (2012), who writes: “I take reality as whatever is not subject to one’s will, and knowledge as an ability to act without being frustrated by resistance from reality.”

    Fortunately Chang, H. (2012). Is Water H2O? Evidence, Realism and Pluralism. Dordrecht: Springer. is more sciency than tough guy philosophy and there is a self-contained chapter on scientific realism that is fairly readable – Active Realism and the Reality…

    Now, I am biased, as Chang cites CS Peirce and a Peirce scholar for most of the insight.

    Keith O’Rourke

    • Thank you, Keith: I will give this book a try, once my current pile of books is cleared. And there is nothing wrong in being biased! Or ex-subjective!!!
