Archive for climate change
This book, Simulating Nature: A Philosophical Study of Computer-Simulation Uncertainties and Their Role in Climate Science and Policy Advice, by Arthur C. Petersen, was sent to me twice by the publisher for review in CHANCE. As I could not find a nearby “victim” to review the book, I took it with me to Australia and read it in bits and pieces along the trip.
“Models are never perfectly reliable, and we are always faced with ontic uncertainty and epistemic uncertainty, including epistemic uncertainty about ontic uncertainty.” (page 53)
The author, Arthur C. Petersen, was a member of the United Nations’ Intergovernmental Panel on Climate Change (IPCC) and works as chief scientist at the PBL Netherlands Environmental Assessment Agency. He mentions that the first edition of this book, Simulating Nature, achieved something of a cult status while now being out of print, which is why he wrote this second edition. The book centres on the notion of uncertainty connected with computer simulations in the first part (pages 1-94) and on the same analysis applied to the simulation of climate change, based on the experience of the author, in the second part (pages 95-178). I must warn the reader that, as the second part got too focussed and acronym-filled for my own taste, I did not read it in depth, even though the issues of climate change and of the human role in this change are definitely of interest to me. (Readers of CHANCE must also realise that there is very little connection with Statistics in this book or in my review of it!) Note that the final chapter is actually more of a neat summary of the book than a true conclusion, so a reader eager to get an idea of the contents of the book can grasp them through the eight pages of the eighth chapter.
“An example of the latter situation is a zero-dimensional (sic) model that aggregates all surface temperatures into a single zero-dimensional (re-sic) variable of globally averaged surface temperature.” (page 41)
The philosophical questions of interest therein are that a computer simulation of reality does not reproduce reality and that the uncertainty(ies) pertaining to this simulation cannot be assessed in its (their) entirety. (This is the inherent meaning of the first quote: epistemic uncertainty relates to our lack of knowledge about the genuine model reproducing Nature or reality…) The author also covers the more practical issue of the interface between scientific reporting and policy making, which reminded me of Christl Donnelly’s talk at the ASC 2012 meeting (about cattle epidemics in England). The book naturally does not bring answers to any of those questions, naturally because a philosophical perspective should consider different sides of the problem, but I found it more interested in typologies and classifications (of types of uncertainties, in crossing those uncertainties with panel attitudes, &tc.) than in the fundamentals of simulation. I am obviously incompetent in the matter; however, as a naïve bystander, it does not seem to me that the book makes any significant progress towards setting epistemological and philosophical foundations for simulation. The part connected with the author’s involvement in the IPCC sheds more light on the difficulties of operating in committees and panels made of members with heavy political agendas than on the possible assessments of uncertainties within the models adopted by climate scientists… With the same proviso as above, the philosophical aspects do not seem very deep: the (obligatory?!) reference to Karl Popper does not bring much to the debate, because what is falsification to simulation? Similarly, Lakatos’ prohibition of “direct[ing] the modus tollens at [the] hard core” (page 40) does not turn into a methodological assessment of simulation praxis.
“I argue that the application of statistical methods is not sufficient for adequately dealing with uncertainty.” (page 18)
“I agree (…) that the theory behind the concepts of random and systematic errors is purely statistical and not related to the locations and other dimensions of uncertainty.” (page 55)
Statistics is mostly absent from the book, apart from the remark that statistical uncertainty (understood as the imprecision induced by a finite amount of data) differs from modelling errors (the model is not reality), which the author considers cannot be handled by statistics (stating that Deborah Mayo‘s theory of statistical error analysis cannot be extended to simulation, see the footnote on page 55). [In other words, this book has no connection with Monte Carlo Statistical Methods! With or without capitals... Except for a mention of `real' random number generators in one of the many footnotes on page 35.] Mention is made of “subjective probabilities” (page 54), presumably meaning a Bayesian perspective. But the distinction between statistical uncertainty and scenario uncertainty, which “cannot be adequately described in terms of chances or probabilities” (page 54), misses the Bayesian perspective altogether, as does the subsequent claim that “specifying a degree of probability or belief [in such uncertainties] is meaningless since the mechanism that leads to the events are not sufficiently known” (page 54).
“Scientists can also give their subjective probability for a claim, representing their estimated chance that the claim is true. Provided that they indicate that their estimate for the probability is subjective, they are then explicitly allowing for the possibility that their probabilistic claim is dependent on expert judgement and may actually turn out to be false.” (page 57)
In conclusion, I fear the book does not bring enough of a conclusion on the philosophical justifications of using a simulation model instead of the actual reality, nor on the more pragmatic aspects of validating/invalidating a computer model and of correcting its imperfections with regard to data/reality. I am quite conscious that this is an immensely delicate issue and that, were it to be entirely solved, the current level of fight between climate scientists and climatoskeptics would not persist. As illustrated by the “Sound Science debate” (pages 68-70), politicians and policy-makers are very poorly equipped to deal with uncertainty, and even less with decision under uncertainty. I however do not buy the (fuzzy and newspeak) concept of “post-normal science” developed in the last part of Chapter 4, where the scientific analysis of a phenomenon is abandoned for decision-making, “not pretend[ing] to be either value-free or ethically neutral” (page 75).
In addition to the solution to the wrong problem, Le Monde of last weekend also dedicated a full page of its Science leaflet to the coverage of Michael Mann’s hockey stick curve of temperature increase and the hard time he has been given by climatoskeptics since its publication in 1998… The page includes an insert on Ed Wegman’s 2006 [infamous] report for the U.S. Congress, amply documented on Andrew’s blog. And mentions the May 2011 editorial of Nature on the plagiarism investigation. (I reproduce it above as it is not available on the Le Monde website.)
A long editorial by Michael Stein on arXiv attracted my attention to an equally long discussion paper in the March 2011 issue of the Annals of Applied Statistics about paleoclimatology and its potential consequences for climate change. I will wait for my hardcopy to arrive by surface mail before going into the paper and discussions, but I was surprised by the high degree of caution and the warnings in this editorial, as if it were trying to buffer incoming criticisms from pro- and anti-global warming groups (which are bound to happen given that climate change is the number one topic on forums of all kinds). It is interesting given that previous issues of the Annals of Applied Statistics have also had their share of potentially controversial material, from the JFK assassination, to the lost tomb of Jesus, to radiation from portals, and so on. (Which is a fair way of attracting readers as long as the statistical quality is guaranteed, which is the case for AoAS!)
Twenty pupils in the class have different grades that are the integers from 1 to 20. The ten girls in the class are ordered from the best grade to the worst one, while the ten boys in the class are placed from the worst grade to the best one. The absolute differences between the pairs thus formed are computed and summed up. What is the range of this sum?
which is different from what I “read”, where both boys and girls were ranked in increasing order. Of course, “my” reading makes more sense (!) from a statistical point of view, because it defines a rank test for both samples having the same distribution. (The range is then between 10 and 100.) However, the solution to the original problem published in the weekend special edition is that the sum is always equal to 100. The argument is that any number less than or equal to 10 is paired with a number larger than 10, so that the numbers larger than 10 always get a positive sign while the numbers up to 10 always get a negative sign, leading to

(11 + 12 + ⋯ + 20) − (1 + 2 + ⋯ + 10) = 155 − 55 = 100.
Obviously, this result holds for any balanced group of pupils. This is however much less interesting from a statistical perspective.
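Both readings are easy to check by brute force. Below is a quick Python sketch (the function names are mine, not the puzzle’s): it verifies over random partitions of the grades 1 to 20 that the published pairing always sums to 100, and that my increasing-order reading stays between 10 and 100.

```python
import random

def puzzle_sum(girls, boys):
    # Published reading: girls ranked best (highest grade) to worst,
    # boys ranked worst to best, then sum of pairwise absolute differences.
    g = sorted(girls, reverse=True)
    b = sorted(boys)
    return sum(abs(x - y) for x, y in zip(g, b))

def same_order_sum(girls, boys):
    # My reading: both groups ranked in increasing order.
    return sum(abs(x - y) for x, y in zip(sorted(girls), sorted(boys)))

def random_split(rng):
    # Random partition of the grades 1..20 into two groups of ten.
    grades = list(range(1, 21))
    rng.shuffle(grades)
    return grades[:10], grades[10:]

rng = random.Random(0)
for _ in range(10_000):
    girls, boys = random_split(rng)
    assert puzzle_sum(girls, boys) == 100            # the published invariant
    assert 10 <= same_order_sum(girls, boys) <= 100  # my reading

# Extremes of the increasing-order reading:
print(same_order_sum(range(1, 20, 2), range(2, 21, 2)))  # odds vs evens: 10
print(same_order_sum(range(1, 11), range(11, 21)))       # 1-10 vs 11-20: 100
```

The invariant holds because if some pair had both grades at most 10, the descending girls and ascending boys up to that pair would already contain more than ten grades below 11, a contradiction (and symmetrically for both above 10), so each pair contributes its larger-than-10 member positively and its other member negatively.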
P.S.: I found recently that both writers of the “Affaire de Logique” page in the weekend Le Monde magazine, Elisabeth Busser and Gilles Cohen, are in fact editors of a math fanzine called Tangente. Gilles Cohen wrote a laudatory review of the book Le Mythe Climatique by Benoît Rittaud, next to an explanation by Benoît Rittaud of the findings of Ed Wegman and of his Academy of Sciences committee about the hockey stick temperature curve. While the problem with the hockey stick is clear enough, the data being recentred only against recent observations, the explanations given in Tangente are fairly obscure. As a coincidence, Benoît Rittaud just decided to put his blog on hold and to move to a collective climatoskeptic blog called Skyfall.