## Archive for epistemology

## Le bayésianisme aujourd’hui

Posted in Books, Statistics with tags Bayesianism, book chapter, epistemology, foundations, France, French book, game theory, Philosophy of Science on September 19, 2016 by xi'an

**A** few years ago, I was asked by Isabelle Drouet to contribute a chapter to a multi-disciplinary book on the Bayesian paradigm, a book that is now about to appear. In French. It has the rather ugly title of *Bayesianism today*. Not that I had heard of *Bayesianism* or *bayésianisme* previously. There are chapters on the Bayesian notion(s) of probability, game theory, statistics, on applications, and on the (potentially) Bayesian structure of human intelligence. Most of it is thus outside statistics, but I will certainly read through it when I receive my copy.

## can we trust computer simulations? [day #2]

Posted in Books, pictures, Statistics, Travel, University life with tags ABC, chaos, climate, climatic-skeptics, design of experiments, ecological models, Ecology, epistemology, equivalence test, ergodicity, Germany, Hannover, Herrenhausen Palace, manufactured solutions, philosophy of sciences, reproducible research, simulation, sociology, state space model, synthetic likelihood, validation, verification, WYSINWYG on July 13, 2015 by xi'an

*“Sometimes the models are better than the data.” G. Krinner*

**S**econd day at the conference on building trust in computer simulations. Starting with a highly debated issue, climate change projections. Since so many criticisms are addressed to climate models as being not only wrong but also unverifiable. And uncheckable. As explained by Gerhard Krinner, the IPCC has developed methodologies to compare models and evaluate predictions. However, from what I understood, this validation does not say anything about the future, which is the part of the predictions that matters. And that is attacked by critics and feeds climatic-skeptics. Because it is so easy to argue against the homogeneity of the climate evolution and for “*what you’ve seen is not what you’ll get*”! (Even though climatic-skeptics are the least likely to use this time-heterogeneity argument, being convinced as they are of the lack of human impact on the climate.) The second talk was by Viktoria Radchuk about validation in ecology. Defined here as a test of predictions against independent data (and designs). And mentioning Simon Wood’s synthetic likelihood as the Bayesian reference for conducting model choice (as a synthetic likelihood ratio). I had never thought of this use (found in Wood’s original paper) for synthetic likelihood, and I feel a bit queasy about using a synthetic likelihood ratio as a genuine likelihood ratio. Which led to a lively discussion at the end of her talk. The next talk was about validation in economics by Matteo Richiardi, who discussed state-space models where the hidden state is observed through a summary statistic, a perfect playground for ABC! But Matteo opted instead for a non-parametric approach that seems to increase imprecision and that I have never seen used in state-space models. The last part of the talk was about non-ergodic models, for which checking for validity becomes much more problematic, in my opinion. Unless one manages multiple observations of the non-ergodic path.
Nicole Saam concluded this “Validation in…” morning with Validation in Sociology. With a more pessimistic approach to the possibility of finding a falsifying strategy, because of the vague nature of sociology models. For which data can never be fully informative. She illustrated the issue with an EU negotiation analysis. Where most hypotheses could hardly be tested.
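For the record, Wood's synthetic likelihood replaces the intractable likelihood of a summary statistic with a Gaussian fitted to simulated summaries, and the model-choice use mentioned above amounts to comparing two such synthetic likelihoods. Here is a minimal sketch under made-up toy models and summaries of my own choosing, not the ecological models of the talk:

```python
# Minimal sketch of a synthetic likelihood ratio for model choice.
# The two toy models and the (mean, log-variance) summaries are
# illustrative assumptions, not the models discussed at the conference.
import numpy as np

rng = np.random.default_rng(0)

def synthetic_loglik(simulate, n_sims, s_obs):
    """Gaussian synthetic log-likelihood of the observed summary s_obs."""
    sims = np.array([simulate(rng) for _ in range(n_sims)])  # n_sims x d
    mu = sims.mean(axis=0)
    cov = np.cov(sims, rowvar=False) + 1e-9 * np.eye(sims.shape[1])
    diff = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet
                   + len(diff) * np.log(2 * np.pi))

def summaries(x):
    # summary statistic of a sample: (mean, log-variance)
    return np.array([x.mean(), np.log(x.var() + 1e-12)])

s_obs = summaries(rng.normal(0.0, 1.0, size=200))         # "observed" data
m1 = lambda r: summaries(r.normal(0.0, 1.0, size=200))    # model 1: N(0,1)
m2 = lambda r: summaries(r.exponential(1.0, size=200))    # model 2: Exp(1)

# synthetic log-likelihood ratio, used here as if it were a genuine one
llr = synthetic_loglik(m1, 500, s_obs) - synthetic_loglik(m2, 500, s_obs)
print(llr > 0)  # model 1 favoured for N(0,1)-generated data
```

Whether such a ratio can be trusted as a genuine likelihood ratio is precisely the point debated at the end of the talk.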

“Bayesians persist with poor examples of randomness.” L. Smith

“Bayesians can be extremely reasonable.” L. Smith

The afternoon session was dedicated to methodology, mostly statistics! Andrew Robinson started with a talk on (frequentist) model validation. Called splitters and lumpers. Illustrated by a forest growth model. He went through traditional hypothesis tests like Neyman-Pearson’s that try to split between samples. And (bio)equivalence tests that take difference as the null. Using his equivalence R package. Then Leonard Smith took over [in a literal way!] from a sort-of-Bayesian perspective, in joint work with Jim Berger and Gary Rosner on pragmatic Bayes, which was mostly negative about Bayesian modelling. Introducing (to me) the compelling notion of structural model error as a representation of the inadequacy of the model. With illustrations from weather and climate models. His criticism of the Bayesian approach is that it cannot be holistic while pretending to be [my wording]. And that it is inadequate for measuring model inadequacy, to the point of making prior choice meaningless. Funny enough, he went back to the ball dropping experiment David Higdon discussed at one JSM I attended a while ago, with the unexpected outcome that one ball did not make it to the bottom of the shaft. A more positive side was that posteriors are useful models but should not be interpreted from a probabilistic perspective. Move beyond probability was his final message. (For most of the talk, I mistook P(BS), the probability of a big surprise, for something else…) This was certainly the most provocative talk of the conference and the discussion could have gone on for the rest of the day! Somehow, Lenny was voluntarily provocative in piling the responsibility upon the Bayesian’s head for being overconfident and not accounting for the physicists’ limitations in modelling the phenomenon of interest. Next talk was by Edward Dougherty on methods used in biology. He separated within-model uncertainty from outside-model inadequacy. The within-model part is mostly easy to agree upon.
Even though difficulties in estimating parameters create uncertainty classes of models. Especially in a small-data discipline. He dismissed machine-learning techniques like classification as useless without prior knowledge. And argued in favour of the Bayesian minimum mean square error estimator. Which can also lead to a classifier. And experimental design. (Using MSE seems rather reductive when facing large dimensional parameters.) Last talk of the day was by Nicolas Becu, a geographer, with a surprising approach to validation via stakeholders. A priori not too enticing a name! The discussion was of a more philosophical nature, going back to (re)define validation against reality and imperfect models. And including social aspects of validation, e.g., reality being socially constructed. This led to the stakeholders, because a model is then a shared representation. Nicolas illustrated the construction by simulation “games” of a collective model in a community of Thai farmers and in a group of water users.
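The equivalence-testing idea above, taking difference as the null, can be sketched with the standard two one-sided tests (TOST) procedure. The data, the ±1.0 margin, and the large-sample normal approximation below are my own illustrative assumptions; Robinson's actual tool is the equivalence R package:

```python
# Sketch of a (bio)equivalence test via TOST: the null hypothesis is that
# the two means differ by MORE than a margin, the reverse of the usual
# Neyman-Pearson split. Data and margin are made up for illustration.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
x = rng.normal(10.0, 1.0, size=100)  # say, observed forest growth
y = rng.normal(10.0, 1.0, size=100)  # say, model predictions

def tost_pvalue(x, y, margin):
    """TOST p-value for H0: |mean(x) - mean(y)| >= margin
    (large-sample normal approximation to the t tests)."""
    d = x.mean() - y.mean()
    se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    z = NormalDist()
    p_lower = 1.0 - z.cdf((d + margin) / se)  # one-sided test of d <= -margin
    p_upper = z.cdf((d - margin) / se)        # one-sided test of d >= +margin
    return max(p_lower, p_upper)              # both must reject to conclude

p = tost_pvalue(x, y, margin=1.0)
print(p)  # small p declares model and data equivalent within the margin
```

Note the asymmetry with the usual test: here a small p-value is evidence *for* closeness, so failing to collect enough data protects against claiming equivalence rather than against claiming a difference.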

In a rather unique fashion, we also had an evening discussion on points we share and points we disagreed upon. After dinner (and wine), which did not help I fear! Bill Oberkampf mentioned the use of manufactured solutions to check code, which seemed very much related to physics. But then we got mired into the necessity of dividing between verification and validation. Which sounded far too engineering-like to me. Maybe because I do not usually integrate coding errors and algorithmic errors into my reasoning (verification)… Although sharing code and making it available makes a big difference. Or maybe because considering that *all* models are wrong is not really part of my methodology (validation). This part ended up with a fairly pessimistic conclusion on the lack of trust in most published articles. At least in the biological sciences.

## can we trust computer simulations?

Posted in Books, pictures, Statistics, University life with tags atmospheric models, Bayesian epistemology, climate simulation, computer model, conference, confirmation, epistemology, Fortran, Hannover, Hempel, Hertzsprung-Russell diagram, Karl Popper, model uncertainty, Monash University, philosophy of sciences, scientific computing, Society for Imprecise Probability, truth, validation, verification on July 10, 2015 by xi'an

**H***ow can one validate the outcome of a simulation model? Or can we even imagine validation of this outcome?* This was the starting question for the conference I attended in Hannover. Which obviously engaged me to the utmost. Relating to some past experiences like advising a student working on accelerated tests for fighter electronics. And failing to agree with him on validating a model to turn those accelerated tests into a realistic setting. Or reviewing this book on climate simulation three years ago while visiting Monash University. Since I discuss in detail below most talks of the day, here is an opportunity to opt away!

## snapshot from Hannover

Posted in pictures, Running, Travel, University life with tags epistemology, Germany, Hannover, neue Rathaus, philosophy of sciences, Saxony, simulation on July 9, 2015 by xi'an

## how to build trust in computer simulations: Towards a general epistemology of validation

Posted in Books, pictures, Statistics, Travel, University life with tags epistemology, Germany, Hannover, Philosophy of Science, simulation model on July 8, 2015 by xi'an

**I** have rarely attended a workshop with such a precise goal, but then I have never attended a philosophy workshop either… Tonight, I am flying to Han(n)over, Lower Saxony, for a workshop on the philosophical aspects of simulation models. I was quite surprised to get invited to this workshop, but found it quite a treat to attend a multi-disciplinary meeting about simulations and their connection with the real world! I am less certain I can contribute anything meaningful, but still look forward to it. And will report on the discussions, hopefully. Here is the general motivation of the workshop:

“In the last decades, our capacities to investigate complex systems of various scales have been greatly enhanced by the method of computer simulation. This progress is not without a price though: We can only trust the results of computer simulations if they have been properly validated, i.e., if they have been shown to be reliable. Despite its importance, validation is often still neglected in practice and only poorly understood from a theoretical perspective. The aim of this conference is to discuss methodological and philosophical problems of validation from a multidisciplinary perspective and to take first steps in developing a general framework for thinking about validation. Working scientists from various natural and social sciences and philosophers of science join forces to make progress in understanding the epistemology of validation.”

## Reference priors for the law of natural induction

Posted in Statistics with tags Bayesian statistics, epistemology, reference priors on March 13, 2009 by xi'an

**J**im Berger, José Bernardo and Dongchu Sun have written a paper on reference/objective priors to be used in hypergeometric sampling or, in more historical terms, for the law of natural induction. (This is an invited discussion paper for the **Revista de la Real Academia de Ciencias, Serie A Matemáticas**.) This paper is interesting in that, by using an argument previously made by Harold Jeffreys in *Theory of Probability*, it overcomes a strong difficulty with the standard Bayesian analysis of the law of natural induction, which gives the probability that all members of a population (say, swans) are of the same kind as those observed so far (say, white swans). In the standard (uniform-prior) approach, the probability that the next draw is of the same kind as the *n* previous ones is (n+1)/(n+2), while the probability that all *N* members are of the same kind is (n+1)/(N+1). These probabilities are paradoxically opposed when *N* is large against *n*: the first is close to one while the second is close to zero. Using a representation of the problem in a hypothesis testing framework, as in Jeffreys's, Jim Berger, José Bernardo and Dongchu Sun give a reference prior that leads to next-draw and whole-population probabilities of comparable magnitude, hence making them compatible. Of course, this is not going to solve the (unrelated) debate about the relevance of this rule for finding black swans, but this is a fairly interesting result!
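The paradox in the standard uniform-prior analysis is easy to check numerically: under a uniform prior on the proportion p of white swans, having seen n white swans the next draw is white with probability (n+1)/(n+2), yet all N swans are white with probability only (n+1)/(N+1). A quick sketch of these two rule-of-succession computations (the values of n and N are arbitrary choices of mine):

```python
# Exact uniform-prior (Laplace) probabilities behind the natural-induction
# paradox, computed as ratios of Beta integrals over [0, 1].
from fractions import Fraction

def p_next_same(n):
    # P(next draw same kind | n same observed) = ∫ p^(n+1) dp / ∫ p^n dp
    return Fraction(n + 1, n + 2)

def p_all_same(n, N):
    # P(all N members same kind | n same observed) = ∫ p^N dp / ∫ p^n dp
    return Fraction(n + 1, N + 1)

n, N = 100, 10_000
print(float(p_next_same(n)))    # ≈ 0.990: the next swan is almost surely white
print(float(p_all_same(n, N)))  # ≈ 0.010: yet "all swans are white" is very unlikely
```

The reference-prior construction in the paper is precisely what brings these two answers back into agreement.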