## Archive for Astrophysics

## black holes capture Nobel

Posted in Statistics, Travel, University life with tags astronomy, Astrophysics, black holes, Garching, Max Planck Institute, Milky Way, Nobel Prize, relativity on October 7, 2020 by xi'an

## meet the black heart of Messier

Posted in pictures, Travel, University life with tags Astrophysics, Atacama, black holes, ERC, Event Horizon Telescope, galaxy Messier 87, M87, NSF, relativity, The Astrophysical Journal Letters on April 10, 2019 by xi'an

## trip to München

Posted in Mountains, Statistics, Travel, University life, Wines with tags ABC, Astrophysics, Bavaria, Charles de Gaulle, dark matter, Eisbier, Germany, Max Planck Institute, Munich, particle physics, population Monte Carlo, RER B, Roissy, Werner-Heisenberg-Institut on October 19, 2015 by xi'an

**W**hile my train ride to the fabulous De Gaulle airport was so much delayed that I had less than ten minutes from jumping from the carriage to sitting in my plane seat, I handled the run through security and the endless corridors of the airport in the allotted time, and reached Munich in time for my afternoon seminar and several discussions that prolonged into a pleasant dinner of Wiener Schnitzel and Eisbier. This was very exciting as I met physicists and astrophysicists involved in population Monte Carlo and parallel MCMC and manageable harmonic mean estimates and intractable ABC settings (because simulating the data takes eons!). I wish the afternoon could have been longer. And while this is the third time I have come to Munich, I still have not managed to see the centre of town! Or even the nearby mountains. Maybe an unsuspected consequence of the Heisenberg principle…

## Bayesian model averaging in astrophysics

Posted in Books, Statistics, University life with tags adaptive importance sampling, Astrophysics, Bayes factor, bridge sampling, computational statistics, evidence, likelihood, model averaging, Monte Carlo technique, population Monte Carlo, statistical analysis and data mining on July 29, 2015 by xi'an

*[A 2013 post that somewhat got lost in a pile of postponed entries and referee’s reports…]*

**I**n this review paper, now published in *Statistical Analysis and Data Mining* 6, 3 (2013), David Parkinson and Andrew R. Liddle go over the (Bayesian) model selection and model averaging perspectives. Their argument in favour of model averaging is that model selection via Bayes factors may simply be too inconclusive to favour one model and only one model. While this is a correct perspective, this is about it for the theoretical background provided therein. The authors then move to the computational aspects and the first difficulty is their approximation (6) to the evidence

where they average the *likelihood x prior* terms over simulations from the posterior, which does not provide a valid (either unbiased or converging) approximation. They surprisingly fail to account for the huge statistical literature on evidence and Bayes factor approximation, including Chen, Shao and Ibrahim (2000), which itself covers earlier developments like bridge sampling (Gelman and Meng, 1998).
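In my own notation (the displayed equation (6) did not survive the transfer here, so this is a reconstruction from the description above): with draws from the posterior, the approximation reads

```latex
\hat{Z} = \frac{1}{N}\sum_{i=1}^{N} \pi(\theta_i)\,L(\theta_i),
\qquad \theta_i \sim \pi(\theta\mid x) \propto \pi(\theta)\,L(\theta),
```

which by the law of large numbers converges to $\int \pi(\theta)^2 L(\theta)^2\,\mathrm{d}\theta \big/ Z$, rather than to the evidence $Z=\int \pi(\theta)\,L(\theta)\,\mathrm{d}\theta$.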

As is often the case in astrophysics, at least since 2007, the authors’ description of nested sampling drifts away from perceiving it as a regular Monte Carlo technique, one with the same convergence rate (error decreasing as n^{-1/2}) and the same dependence on dimension as other Monte Carlo techniques. It is certainly not the only simulation method where the produced “samples, as well as contributing to the evidence integral, can also be used as posterior samples.” The authors then move to “population Monte Carlo [which] is an adaptive form of importance sampling designed to give a good estimate of the evidence”, a particularly restrictive description of a generic adaptive importance sampling method (Cappé et al., 2004). The approximation of the evidence (9) based on PMC also seems invalid:

is missing the prior in the numerator. (The switch from θ in Section 3.1 to X in Section 3.4 is confusing.) Further, the sentence “PMC gives an unbiased estimator of the evidence in a very small number of such iterations” is misleading in that PMC is unbiased at each iteration. Reversible jump is not described at all (the supposedly higher efficiency of this algorithm is far from guaranteed when facing a small number of models, which is the case here, since the moves between models are governed by a random walk and the acceptance probabilities can be quite low).
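For concreteness, here is a minimal sketch (my own toy example, not taken from the paper) of a single-iteration importance sampling estimate of the evidence, of the kind PMC iterates upon: the weights must include the prior in the numerator, and the resulting estimator is unbiased at every iteration. The conjugate Gaussian model is chosen so that the true evidence is available in closed form for comparison.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model where the evidence is known in closed form:
#   prior       theta ~ N(0, tau^2)
#   likelihood  x | theta ~ N(theta, sigma^2)
#   => marginal (evidence)  x ~ N(0, tau^2 + sigma^2)
tau, sigma, x = 2.0, 1.0, 0.5

def normal_pdf(y, mean, sd):
    return np.exp(-0.5 * ((y - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Importance sampling estimate of the evidence: draw from a proposal q,
# weight each draw by prior * likelihood / proposal
q_mean, q_sd = 0.0, 3.0               # arbitrary proposal covering the posterior
theta = rng.normal(q_mean, q_sd, size=100_000)
weights = (normal_pdf(theta, 0.0, tau)         # prior -- must be in the numerator
           * normal_pdf(x, theta, sigma)       # likelihood
           / normal_pdf(theta, q_mean, q_sd))  # proposal density
z_hat = weights.mean()

z_true = normal_pdf(x, 0.0, np.sqrt(tau**2 + sigma**2))
print(z_hat, z_true)  # the two values agree closely
```

Dropping the prior factor from the weights would instead target a different integral altogether, which is the invalidity pointed out above.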

The second, quite unrelated, part of the paper covers published applications in astrophysics. Unrelated because the three different methods exposed in the first part are not compared on the same dataset. Model averaging is obviously based on a computational device that explores the posteriors of the different models under comparison (or, rather, averaging), however no recommendation is to be found in the paper as to how to implement the averaging efficiently, or anything of the kind. In conclusion, I thus find this review somewhat anticlimactic.

## L’Aquila: earthquake, verdict, and statistics

Posted in Statistics, University life with tags Astrophysics, Cardiff, earthquake, Giordano Bruno, In the Dark, Italy, journalism, L'Aquila, scientific communication, Statistics on October 25, 2012 by xi'an

**Y**esterday I read this blog entry by Peter Coles, a Professor of Theoretical Astrophysics at Cardiff and soon in Brighton, about the L’Aquila earthquake verdict, condemning six Italian scientists to severe jail sentences. While most of the blogs around reacted against this verdict as an anti-scientific decision and as a 21st Century remake of Giordano Bruno‘s murder by the Roman Inquisition, Peter Coles argues, to the contrary, that the scientists were not scientific *enough* in that instance. And should have used statistics and probabilistic reasoning. While I did not look into the details of the L’Aquila earthquake judgement and thus have no idea whether or not the scientists were guilty of not signalling the potential for disaster, were an earthquake to occur, I cannot but repost one of Coles’ most relevant paragraphs:

> I thought I’d take this opportunity to repeat the reasons I think statistics and statistical reasoning are so important. Of course they are important in science. In fact, I think they lie at the very core of the scientific method, although I am still surprised how few practising scientists are comfortable even with statistical language. A more important problem is the popular impression that science is about facts and absolute truths. It isn’t. It’s a process. In order to advance, it has to question itself.
>
> Statistical reasoning also applies outside science to many facets of everyday life, including business, commerce, transport, the media, and politics. It is a feature of everyday life that science and technology are deeply embedded in every aspect of what we do each day. Science has given us greater levels of comfort, better health care, and a plethora of labour-saving devices. It has also given us unprecedented ability to destroy the environment and each other, whether through accident or design. Probability even plays a role in personal relationships, though mostly at a subconscious level.

**A** bit further down, Peter Coles also bemoans the shortcuts and oversimplification of scientific journalism, which reminded me of the time Jean-Michel Marin had to deal with radio journalists about an “impossible” lottery coincidence:

> Years ago I used to listen to radio interviews with scientists on the Today programme on BBC Radio 4. I even did such an interview once. It is a deeply frustrating experience. The scientist usually starts by explaining what the discovery is about in the way a scientist should, with careful statements of what is assumed, how the data is interpreted, and what other possible interpretations might be and the likely sources of error. The interviewer then loses patience and asks for a yes or no answer. The scientist tries to continue, but is badgered. Either the interview ends as a row, or the scientist ends up stating a grossly oversimplified version of the story.