Oxford University Press sent me this book by Phyllis Illari and Federica Russo, Causality (Philosophical Theory Meets Scientific Practice), a little while ago. (The book appeared in 2014.) Unless I asked for it, I cannot remember…
“The problem is whether and how to use information of general causation established in science to ascertain individual responsibility.” (p.38)
As the subtitle indicates, this is a philosophy book, not a statistics book. And not particularly intended for statisticians. Hence, I am not exactly qualified to analyse its contents, and even less to criticise its lack of connection with statistics. But this being a blog post… I read rather slowly through the book, which exposes a wide range (“a map”, p.8) of approaches and perspectives on the notion of causality, some ways of inferring causality, and the point of doing all this, concluding with a relativistic (and thus eminently philosophical) viewpoint defending a “pluralistic mosaic” or a “causal mosaic” that relates to all existing accounts of causality, as they “each do something valuable” (p.258). From a naïve bystander's perspective, this sounds like a new avatar of deconstructionism applied to causality.
“Simulations can be very illuminating about various phenomena that are complex and have unexpected effects (…) can be run repeatedly to study a system in different situations to those seen for the real system…” (p.15)
This is not to say that the book is uninteresting, as it provides a wide entry into philosophical attempts at categorising and defining causality, if not into the statistical aspects of the issue. (For instance, the problem of whether or not causality can ever be established from a statistical perspective alone is not mentioned.) Among the interesting points in the early chapters is a section (2.5) about simulation. Which however misses the depth of this earlier book on climate simulations I reviewed while in Monash. Or of the discussions at the interdisciplinary seminar last year in Hanover. I.J. Good's probabilistic causality is mentioned but hardly detailed. (With the warning remark that one “should not confuse predictability with determinism [and] determinism with causality”, p.82.)
“Validity has been discussed in relation to statistical properties of a model, or the possibility of replicating the study.” (p.63)
I was also looking forward to the entries on counterfactuals, but did not find the corresponding chapter (Chap. 9) that enlightening. And the following chapters, covering notions like manipulation, production accounts, or capacities, mostly escaped me, in particular because they were missing convincing scientific illustrations. Only when Hume's perspective got discussed (Chap. 15) did I get back on track. With the reflection that the intent was less on the notion of causality itself, which may well remain beyond our reach, and more on establishing causality, i.e., a methodology based on evidence (whatever that means!) and regularity. A chapter that also exposed Hume as a frequentist in this respect! Chapter 17, about action, exposes some connections between causality and decision theory, in that causality only exists with respect to what we want to do with it. Leading also to the novel [to me!] notion of inferentialism, which may be seen as anti-realistic in its most extreme versions.
“It is difficult to overestimate the importance of probability and statistics to science.” (p.75)
As to be expected in a philosophy book, a great deal of material is dedicated to the notion of “truth”, its existence, its use, and whether or not it can be uncovered. Chapter 7 for instance insists on the operational alternative of models, but immediately states that they must be validated (which “is not the same thing as truth”, p.63). It is unclear how this can be done in a general sense. Another question that came to me while reading the book was whether or not causality (like probability?) could be defined for a one-time phenomenon. From a conceptual viewpoint, I see no difficulty in that: if E happened, it is because C had happened before. From a demonstration viewpoint, it is less clear, because the experiment cannot be repeated. Unless one can use simulation and a formal (prior) model. Which would distinguish a frequentist causality from a Bayesian causality?
“Philosophers of science know how difficult it is to reconstruct the process of hypothesis generation.” (p.55)
There are a few statistics entries, besides the predictable “correlation is not causation!”, like Simpson's paradox and Bayes nets, but what I find missing, besides the above point of whether or not causality can at all be ascertained, is a deep enough (statistical) criticism of the notion of model. While there exists a chapter entitled Truth or models (Chap. 21), it does not reach a compelling conclusion on how to reach “truth” from a given (false) model. (Inference and evidence are not to be understood there in their statistical sense.) The authors reproduce Box's meme that all models are false but some may be useful, without truly ascertaining what useful means.
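Simpson's paradox, one of the few statistical entries mentioned above, is easy to demonstrate numerically. Here is a minimal sketch using the kidney-stone treatment counts classically quoted for the paradox (the stratum labels are purely illustrative): treatment A has the higher success rate within each stratum, yet the lower rate once the strata are pooled.

```python
# Simpson's paradox on the classic kidney-stone counts:
# (successes, trials) per treatment and stratum.
def rate(successes, trials):
    return successes / trials

a_small = (81, 87)     # treatment A, small stones (easy cases)
b_small = (234, 270)   # treatment B, small stones
a_large = (192, 263)   # treatment A, large stones (hard cases)
b_large = (55, 80)     # treatment B, large stones

# Within each stratum, A wins...
assert rate(*a_small) > rate(*b_small)   # 0.93 vs 0.87
assert rate(*a_large) > rate(*b_large)   # 0.73 vs 0.69

# ...but pooling reverses the comparison, because A was mostly
# assigned to the hard cases (a confounder at work).
a_all = (81 + 192, 87 + 263)    # (273, 350)
b_all = (234 + 55, 270 + 80)    # (289, 350)
assert rate(*a_all) < rate(*b_all)       # 0.78 vs 0.83
```

The reversal is driven entirely by the unequal allocation of treatments across strata, which is precisely why marginal association alone cannot settle a causal claim.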
“If each tradition has made a good case for allowing their favoured relata, then we can react by accepting them all.” (p.233)
The last section is in my opinion the most interesting, and I should have read it before the other sections. The authors mention in the preface that this covers the methodology behind their other book, “Causality in the Sciences”. The discussion therein is quite productive, in that it led me to reflect further on those issues, rather than reading others' theories as in the earlier chapters. Once again, it is more about methodology than causality per se, but it nonetheless remains pertinent to scientific practice. And to statistical inference. As stated at the beginning of this review, I find the all-inclusive conclusion [or lack thereof] of the book quite puzzling, but this may be the case with most philosophy books…