## Archive for Nature

## Olympus at work [Nature snapshot]

Posted in Books, pictures, Travel, University life with tags camera, Nature, Olympus, Perth, Science, squat lobster, Western Australian Museum, where I work on June 28, 2020 by xi'an

## Nature reflections on policing

Posted in Books, Kids, Statistics, University life with tags agent-based models, algorithmic policing, all models are wrong, George Floyd, Nature, non-violence, police, racism, systemic bias, systemic racism on June 24, 2020 by xi'an

## Naturally amazed at non-identifiability

Posted in Books, Statistics, University life with tags Akaike's criterion, birth-and-death process, extant timetrees, identifiability, missing species problem, Nature, Ockham's razor, phylogeny, PNAS, population genetics, speciation rate, Steve Fienberg on May 27, 2020 by xi'an

**A** Nature paper by Stilianos Louca and Matthew W. Pennell, *Extant timetrees are consistent with a myriad of diversification histories*, comes to the extraordinary conclusion that birth-and-death evolutionary models cannot distinguish between several scenarios given the available data! Namely, stem ages and daughter lineage ages cannot identify the speciation rate function *λ(.)*, the extinction rate function *μ(.)*, and the sampling fraction *ρ* that inherently define the deterministic ODE leading to *N(τ)*, the number of species predicted at any point *τ* in time. The Nature paper does not seem to make a point beyond the obvious and I am rather perplexed at why it got published [and even highlighted]. A while ago, under the leadership of Steve Fienberg, PNAS decided to include statistician reviewers for papers relying on statistical arguments. It may be time for Nature to move in that direction as well.

“We thus conclude that two birth-death models are congruent if and only if they have the same *r_p* and the same *λ_p* at some time point in the present or past.” [S.1.1, p.4]

Or, stated otherwise, a tree-structured dataset made of branch lengths is not enough to identify the two functions that parameterise the model. The likelihood looks like

$$L \;\propto\; \frac{\psi(\tau_0,0)}{1-E(\tau_0)}\,\prod_{i=1}^{n-1} \lambda(\tau_i)\,\psi(\tau_i,0)$$

where *τ₀* is the stem age, the *τᵢ*'s the daughter lineage ages, *E(.)* the probability to survive to the present, and *ψ(s,t)* the probability to survive and be sampled between times *s* and *t*. Sort of. Both functions depend on the functions *λ(.)* and *μ(.)*. (When the stem age is unknown, the likelihood changes a wee bit, but with no change in the qualitative conclusions.) Another way to write this likelihood is in terms of the pulled speciation rate *λ_p*,

$$L \;\propto\; e^{-\Lambda_p(\tau_0)}\,\prod_{i=1}^{n-1} \lambda_p(\tau_i)\,e^{-\Lambda_p(\tau_i)}$$

where *Λ_p* is the integrated rate, but this version shares the same characteristic of being unable to identify the functions *λ(.)* and *μ(.)*. While this sounds quite obvious, the paper (or rather the supplementary material) goes into fairly extensive mode, including “abstract” algebra, to define congruence.
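To make the non-identifiability concrete, here is a minimal numerical sketch of my own (not code from the paper), using the classical constant-rate congruence *(λ, μ, ρ) ≡ (ρλ, μ − λ(1−ρ), 1)* due to Stadler: two different parameter triples return exactly the same conditional likelihood for the same branching times. The function names and the toy tree ages are invented for illustration.

```python
import math

def E(t, lam, mu, rho):
    # probability that a lineage alive at age t leaves no sampled descendant
    r = lam - mu
    return 1 - rho * r / (rho * lam + (lam * (1 - rho) - mu) * math.exp(-r * t))

def p1(t, lam, mu, rho):
    # probability that a lineage alive at age t has exactly one sampled descendant
    r = lam - mu
    den = rho * lam + (lam * (1 - rho) - mu) * math.exp(-r * t)
    return rho * r**2 * math.exp(-r * t) / den**2

def loglik(stem_age, node_ages, lam, mu, rho):
    # constant-rate birth-death log-likelihood of a reconstructed timetree,
    # conditioned on the process leaving at least one sampled descendant
    ll = math.log(p1(stem_age, lam, mu, rho))
    for x in node_ages:
        ll += math.log(lam) + math.log(p1(x, lam, mu, rho))
    ll -= math.log(1 - E(stem_age, lam, mu, rho))
    return ll

stem, nodes = 10.0, [1.2, 3.4, 5.0, 7.8]   # toy stem age and branching ages
lam, mu, rho = 1.0, 0.5, 0.5
# congruent triple: (rho*lam, mu - lam*(1-rho), 1), i.e. complete sampling
lam2, mu2, rho2 = rho * lam, mu - lam * (1 - rho), 1.0
print(loglik(stem, nodes, lam, mu, rho))
print(loglik(stem, nodes, lam2, mu2, rho2))  # same value: indistinguishable
```

Both calls print the same number: no amount of data of this form can separate the two triples.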

“…we explain why model selection methods based on parsimony or “Occam’s razor”, such as the Akaike Information Criterion and the Bayesian Information Criterion that penalize excessive parameters, generally cannot resolve the identifiability issue…” [S.2, p15]

As illustrated by the above quote, the supplementary material also includes a section arguing that statistical model selection techniques fail to capture the issue, a section that seems superfluous, or even absurd, once it has been stated that the likelihood is constant across a congruence class.
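Indeed, the point fits in one line: since every member of a congruence class attains the same maximised likelihood, an information criterion reduces to its parameter-count penalty and mechanically favours the most parsimonious member, whatever the true history. A toy illustration, with an invented log-likelihood value:

```python
# hypothetical maximised log-likelihood, identical across a congruence class
loglik = -137.2
aic = lambda k, ll: 2 * k - 2 * ll          # Akaike Information Criterion
print(aic(2, loglik))   # constant-rate member of the class
print(aic(5, loglik))   # time-varying member: same fit, larger penalty
```

The criterion "selects" the smaller model purely by penalty, which says nothing about which diversification history actually occurred.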

## free Springer textbooks [incl. Bayesian essentials]

Posted in Statistics with tags All of Statistics, Bayesian Essentials with R, Bayesian textbook, coronavirus epidemics, lockdown, Nature, quarantine, R, Springer-Verlag, textbook on May 4, 2020 by xi'an

## [Nature on] simulations driving the world’s response to COVID-19

Posted in Books, pictures, Statistics, Travel, University life with tags agent-based models, awful graphs, calibration, COVID-19, credible intervals, epidemiology, Nature, ODEs, predictive analytics, SIR, UK, United Kingdom on April 30, 2020 by xi'an

**N**ature of 02 April 2020 has a special section on simulation methods used to assess and predict the pandemic evolution, calling for caution as the models used therein, like the standard ODE S(E)IR models, rely on assumptions about the spread of the disease and very rarely on data, especially in the early stages of the pandemic. One epidemiologist is quoted as stating “We’re building simplified representations of reality”, but this is not dire enough, as “simplified” evokes “less precise” rather than “possibly grossly misleading”. (The graph above is unrelated to the Nature cover and appears to me as particularly appalling in mixing different types of data, time-scales, populations at risk, and discontinuous updates, essentially returning no information whatsoever.)

“[the model] requires information that can be only loosely estimated at the start of an epidemic, such as the proportion of infected people who die, and the basic reproduction number (…) rough estimates by epidemiologists who tried to piece together the virus’s basic properties from incomplete information in different countries during the pandemic’s early stages. Some parameters, meanwhile, must be entirely assumed.”
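To see how strongly the outputs hinge on such loosely estimated inputs, here is a minimal SIR sketch of my own (not the model from the report); the 7-day infectious period and the initial prevalence are assumptions made for illustration. Small shifts in the basic reproduction number move both the epidemic peak and the final attack rate substantially:

```python
def sir(r0, gamma=1/7, days=300, i0=1e-4):
    # Euler-discretised SIR with daily steps; the infectious period 1/gamma
    # and initial prevalence i0 are illustrative assumptions, not data
    beta = r0 * gamma
    s, i, peak = 1.0 - i0, i0, 0.0
    for _ in range(days):
        new_inf = beta * s * i
        s, i = s - new_inf, i + new_inf - gamma * i
        peak = max(peak, i)
    return peak, 1.0 - s   # peak prevalence and final attack rate

for r0 in (2.0, 2.5, 3.0):
    peak, attack = sir(r0)
    print(f"R0={r0}: peak prevalence {peak:.1%}, attack rate {attack:.1%}")
```

A 50% uncertainty on *R₀*, quite common early in an epidemic, thus translates into very different peaks and totals, before even questioning the SIR structure itself.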

The report mentions that the team at Imperial College, whose predictions impacted the UK Government's decisions, also used an agent-based model, with more variability or stochasticity in individual actions, which requires even more assumptions or much more refined, representative, and trustworthy data.
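An agent-based model trades the ODE's determinism for individual-level randomness; even a toy version (entirely my own construction, with invented contact and recovery parameters, unrelated to the Imperial College model) shows how much run-to-run variability the extra stochasticity brings:

```python
import random

def agent_outbreak(n=2000, beta=0.35, gamma=1/7, days=300, seed=0):
    # Toy agent-based SIR: each infectious agent contacts one random agent
    # per day, infects susceptibles w.p. beta, and recovers w.p. gamma.
    # All parameter values are invented for illustration.
    rng = random.Random(seed)
    state = ["S"] * n
    state[0] = "I"
    for _ in range(days):
        infectious = [i for i, s in enumerate(state) if s == "I"]
        if not infectious:
            break
        for i in infectious:
            j = rng.randrange(n)
            if state[j] == "S" and rng.random() < beta:
                state[j] = "I"
            if rng.random() < gamma:
                state[i] = "R"
    return sum(s != "S" for s in state)   # total ever infected

sizes = sorted(agent_outbreak(seed=s) for s in range(10))
print(sizes)   # same parameters, different outcomes across seeds
```

Identical parameters produce outbreaks of very different sizes, some dying out early by chance alone, which is precisely why such models demand more refined data to calibrate.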

“Unfortunately, during a pandemic it is hard to get data — such as on infection rates — against which to judge a model’s projections.”

Unfortunately, the paper was written in the early days of the rise of cases in the UK, which means the predictions could not yet be confronted with actual numbers of deaths and hospitalisations. The following quote shows how far off they can fall from reality:

“the British response, Ferguson said on 25 March, makes him “reasonably confident” that total deaths in the United Kingdom will be held below 20,000.”

since the total number as of April 29 is above ~~21,000~~ ~~24,000~~ 29,750 and shows no sign of quickly slowing down… A quite useful general-public article, nonetheless.