Archive for ASC 2012

simulating Nature

Posted in Books, Statistics on July 25, 2012 by xi'an

This book, Simulating Nature: A Philosophical Study of Computer-Simulation Uncertainties and Their Role in Climate Science and Policy Advice, by Arthur C. Petersen, was sent to me twice by the publisher for review in CHANCE. As I could not find a nearby “victim” to review the book, I took it with me to Australia and read it in bits and pieces along the trip.

“Models are never perfectly reliable, and we are always faced with ontic uncertainty and epistemic uncertainty, including epistemic uncertainty about ontic uncertainty.” (page 53)

The author, Arthur C. Petersen, was a member of the United Nations’ Intergovernmental Panel on Climate Change (IPCC) and works as chief scientist at the PBL Netherlands Environmental Assessment Agency. He mentions that the first edition of this book, Simulating Nature, achieved some kind of cult status while now being out of print, which is why he wrote this second edition. The book centres on the notion of uncertainty connected with computer simulations in the first part (pages 1-94) and on the same analysis applied to the simulation of climate change, based on the author’s experience, in the second part (pages 95-178). I must warn the reader that, as the second part got too focussed and acronym-filled for my own taste, I did not read it in depth, even though the issues of climate change and of the human role in this change are definitely of interest to me. (Readers of CHANCE must also realise that there is very little connection with Statistics in this book or in my review of it!) Note that the final chapter is actually more of a neat summary of the book than a true conclusion, so a reader eager to get an idea of the contents can grasp them through the eight pages of the eighth chapter.

“An example of the latter situation is a zero-dimensional (sic) model that aggregates all surface temperatures into a single zero-dimensional (re-sic) variable of globally averaged surface temperature.” (page 41)

The philosophical questions of interest therein are that a computer simulation of reality is not reproducing reality and that the uncertainty(ies) pertaining to this simulation cannot be assessed in its (their) entirety. (This is the inherent meaning of the first quote: epistemic uncertainty relates to our lack of knowledge about the genuine model reproducing Nature or reality…) The author also covers the more practical issue of the interface between scientific reporting and policy making, which reminded me of Christl Donnelly’s talk at the ASC 2012 meeting (about cattle epidemics in England). The book naturally does not bring answers to any of those questions, naturally because a philosophical perspective should consider different sides of the problem, but I find the book more interested in typologies and classifications (of types of uncertainty, in crossing those uncertainties with panel attitudes, &tc.) than in the fundamentals of simulation. I am obviously incompetent in the matter; however, as a naïve bystander, it does not seem to me that the book makes any significant progress towards setting epistemological and philosophical foundations for simulation. The part connected with the author’s involvement in the IPCC sheds more light on the difficulties of operating in committees and panels made of members with heavy political agendas than on the possible assessments of uncertainties within the models adopted by climate scientists… With the same proviso as above, the philosophical aspects do not seem very deep: the (obligatory?!) reference to Karl Popper does not bring much to the debate, because what is falsification to simulation? Similarly, Lakatos’ prohibition of “direct[ing] the modus tollens at [the] hard core” (page 40) does not turn into a methodological assessment of simulation praxis.

“I argue that the application of statistical methods is not sufficient for adequately dealing with uncertainty.” (page 18)

“I agree (…) that the theory behind the concepts of random and systematic errors is purely statistical and not related to the locations and other dimensions of uncertainty.” (page 55)

Statistics is mostly absent from the book, apart from the remark that statistical uncertainty (understood as the imprecision induced by a finite amount of data) differs from modelling errors (the model is not reality), which the author considers cannot be handled by statistics (stating that Deborah Mayo‘s theory of statistical error analysis cannot be extended to simulation; see the footnote on page 55). [In other words, this book has no connection with Monte Carlo Statistical Methods! With or without capitals… except for a mention of `real’ random number generators in one of the many footnotes on page 35.] Mention is made of “subjective probabilities” (page 54), presumably meaning a Bayesian perspective. But the distinction between statistical uncertainty and scenario uncertainty which “cannot be adequately described in terms of chances or probabilities” (page 54) misses the Bayesian perspective altogether, as does the following sentence that “specifying a degree of probability or belief [in such uncertainties] is meaningless since the mechanism that leads to the events are not sufficiently known” (page 54).

“Scientists can also give their subjective probability for a claim, representing their estimated chance that the claim is true. Provided that they indicate that their estimate for the probability is subjective, they are then explicitly allowing for the possibility that their probabilistic claim is dependent on expert judgement and may actually turn out to be false.” (page 57)

In conclusion, I fear the book does not bring enough of a conclusion on the philosophical justifications of using a simulation model instead of the actual reality, or on the more pragmatic aspects of validating/invalidating a computer model and of correcting its imperfections with regard to data/reality. I am quite conscious that this is an immensely delicate issue and that, were it to be entirely solved, the current level of fighting between climate scientists and climate skeptics would not persist. As illustrated by the “Sound Science debate” (pages 68-70), politicians and policy-makers are very poorly equipped to deal with uncertainty, and even less with decision under uncertainty. I however do not buy the (fuzzy and newspeak) concept of “post-normal science” developed in the last part of Chapter 4, where the scientific analysis of a phenomenon is abandoned for decision-making, “not pretend[ing] to be either value-free or ethically neutral” (page 75).

AMSI-SSAI Lecture #1 at University of Adelaide

Posted in pictures, Statistics, Travel, University life on July 16, 2012 by xi'an

On Friday, I gave my first AMSI Lecture, at the University of Adelaide. The talk attracted a fair number of people, especially considering that I had already given a talk on ABC the day before. There were also several interesting questions at the end, mostly related to the (ABC) empirical likelihood part, which seems to have a high power of attraction! This talk furthermore gave me the opportunity to visit the superb Ingkarni Wardli building housing maths and engineering. The name means “house of enquiry” in the local indigenous language, well-suited to a science building indeed! (My next talk is at UNSW on Monday afternoon, the very same talk I gave at the ASC 2012 conference last Thursday.)

ASC 2012 (#2)

Posted in pictures, Running, Statistics, Travel, University life on July 12, 2012 by xi'an

This morning, after a nice and cool run along the river Torrens amidst almost unceasing bird songs, I attended another Bayesian ASC 2012 session, with Scott Sisson presenting a simulation method aimed at correcting for biased confidence intervals and Robert Kohn giving the same talk he had given in Kyoto. Scott’s proposal, which is rather similar to parametric bootstrap bias correction, is actually more frequentist than Bayesian, as the bias is defined in terms of a correct frequentist coverage of a given confidence (or credible) interval. (Thus making the connection with Roderick Little’s calibrated Bayes talk of yesterday.) This perspective thus perceives ABC as a particular inferential method, instead of a computational approximation to the genuine Bayesian object. (We will certainly discuss the issue with Scott next week in Sydney.)
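To give a flavour of the parametric bootstrap coverage correction that Scott's proposal resembles, here is a toy sketch of my own (not Scott's actual method, and all function names are mine): fit a model, simulate replicate datasets from the fit, measure how often a naive interval covers the fitted parameter, and inflate the nominal level to compensate for the coverage shortfall.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

def naive_interval(sample, level=0.95):
    """Plug-in interval for the mean: xbar +/- z * s/sqrt(n).

    Uses the MLE of sigma (ddof=0), so the interval is slightly too
    narrow and undercovers for small n.
    """
    z = NormalDist().inv_cdf(0.5 + level / 2)
    m, s = sample.mean(), sample.std()
    half = z * s / np.sqrt(len(sample))
    return m - half, m + half

def calibrated_level(sample, nominal=0.95, B=2000):
    """Parametric bootstrap calibration of the nominal level.

    Simulate B replicate datasets from the fitted normal model,
    estimate the actual coverage of the naive interval over the
    fitted mean, and widen the nominal level by the shortfall.
    """
    n = len(sample)
    m_hat, s_hat = sample.mean(), sample.std()
    cover = 0
    for _ in range(B):
        boot = rng.normal(m_hat, s_hat, n)
        lo, hi = naive_interval(boot, nominal)
        cover += lo <= m_hat <= hi
    coverage = cover / B
    # crude recalibration: add the estimated coverage deficit
    return min(0.999, nominal + (nominal - coverage))

# usage: recalibrate, then rebuild the interval at the corrected level
sample = rng.normal(0.0, 1.0, 10)
level = calibrated_level(sample)
lo, hi = naive_interval(sample, level)
```

The double use of the data (to fit the model and to judge the interval) is exactly what makes the exercise frequentist in spirit, since everything is driven by repeated-sampling coverage rather than by a posterior distribution.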

Then Peter Donnelly gave a particularly exciting and well-attended talk on the geographic classification of humans, in particular of the (early 1900s) population of the British Isles, based on a clever clustering idea derived from an earlier paper by Na Li and Matthew Stephens: using genetic sequences from a group of individuals, each individual was paired with the rest of the sample as if it descended from this population. Using a hidden Markov model, this led to clustering the sample into about 50 groups, with a remarkable geographic homogeneity: for instance, Cornwall and Devon made two distinct groups, an English-speaking pocket of Wales (Little England) was identified as a specific group, and so on, with central, eastern, and southern England constituting a homogeneous group of its own…

quick impressions from Japan

Posted in pictures, Running, Travel on July 9, 2012 by xi'an

Just as last year’s trip to Shanghai was my first visit to China, this trip to Kyoto was my first time in Japan. I found the experience so exhilarating that I am already considering a trip back next year! (Especially since I could have lost all of my Kyoto pictures with my hard drive!) The mix of tradition and modernity, of history and high-tech, of chaotic architecture and smooth interactions, of rice fields in the shade of high-rises and of houses in the shade of expressways: all those snapshots, caught from the train as much as from the few hours I spent exploring the modern part of the city, are already starting to blur, and I feel the need to go beyond these first impressions to get a broader and deeper perspective on this fascinating country!