Archive for data

the biggest bluff [not a book review]

Posted in Books on August 14, 2020 by xi'an

It came as a surprise to me that the book reviewed in the book review section of Nature of 25 June was a personal account by a professional poker player, The Biggest Bluff by Maria Konnikova. (Surprise enough to write a blog entry!) I see very little scientific impetus in studying the psychology of poker players and the associated decision-making. Obviously, this is not a book review, but a review of the book review. (Although the NYT published a rather extensive extract of the book, in which I cannot detect anything deep from a game-theory viewpoint, apart from the maybe-not-so-deep message that psychology matters a lot in poker…) Which does not bring much incentive for those uninterested (or worse) in money games like poker. Even when “a heap of Bayesian model-building [is] thrown in”, the review conflates randomness and luck, while presenting the book as teaching the reader “how to play the game of life”, a type of self-improvement vending line one hardly expects to read in a scientific journal. (But again I have never understood the point of playing poker…)

politics coming [too close to] statistics [or the reverse]

Posted in Books, pictures, Statistics, University life on May 9, 2020 by xi'an

On 30 April, David Spiegelhalter wrote an opinion column in The Guardian, Coronavirus deaths: how does Britain compare with other countries?, where he pointed out the difficulty, even “for a bean-counting statistician to count deaths”, as the reported figures are undercounts, and stated that “many feel that excess deaths give a truer picture of the impact of an epidemic”. Which, as an aside, I indeed believe is more objective material, as also reported by INSEE and INED in France.

“…my cold, statistical approach is to wait until the end of the year, and the years after that, when we can count the excess deaths. Until then, this grim contest won’t produce any league tables we can rely on.” D. Spiegelhalter

My understanding of the tribune is that the quick accumulation of raw numbers, even death counts, and their use in comparing procedures and countries does not help in understanding the impact of policies and of actions and reactions from a week earlier. Starting with the delays in reporting death certificates, as again illustrated by the ten-day lag in the INSEE reports. And continuing with the need to account for covariates such as population density and economic and health indicators. (The graph below, for instance, relies on deaths so far attributed to COVID-19 rather than on excess deaths, while these attributions depend on each country's policy and the capacities of its official statistics.)
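The excess-deaths notion mentioned above has a simple arithmetic core: observed deaths minus a baseline expected from earlier years. A minimal sketch, with all weekly figures below invented purely for illustration:

```python
def excess_deaths(observed, baseline_years):
    """Weekly excess deaths: observed counts minus the mean of past years."""
    # Average each week's count across the baseline years.
    expected = [sum(week) / len(week) for week in zip(*baseline_years)]
    return [obs - exp for obs, exp in zip(observed, expected)]

# Invented weekly death counts for three baseline years, four weeks each:
past = [
    [100, 110, 105, 98],   # e.g. 2017
    [102, 108, 110, 100],  # 2018
    [98, 112, 103, 102],   # 2019
]
this_year = [150, 180, 200, 170]

print(excess_deaths(this_year, past))  # → [50.0, 70.0, 94.0, 70.0]
```

Of course, real baselines involve seasonal adjustment and reporting delays (the ten-day INSEE lag mentioned above), which is precisely why the raw weekly figures are hard to compare across countries.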

“Polite request to PM and others: please stop using my Guardian article to claim we cannot make any international comparisons yet. I refer only to detailed league tables—of course we should now use other countries to try and learn why our numbers are high.” D. Spiegelhalter

However, when on 6 May Boris Johnson used this Guardian article during prime minister's questions in the UK Parliament to defuse a question from the Labour leader, Keir Starmer, David Spiegelhalter reacted with the above tweet, pointing out that even with poor and undercounted data the total number of cases is much worse than predicted by the earlier models, and deadlier than in neighbouring countries. Anyway, three other fellow statisticians, Phil Brown, Jim Smith (Warwick), and Henry Wynn, also reacted to David's tribune by complaining about the lack of statistical modelling behind it and the fatalistic message it carries, advocating for model-based decision-making, which would be fine if the data were not so unreliable… or if the proposed models were equipped with uncertainty bumpers accounting for misspecification and erroneous data.

data is everywhere

Posted in Kids, pictures, Statistics, University life on November 25, 2018 by xi'an

agent-based models

Posted in Books, pictures, Statistics on October 2, 2018 by xi'an

An August issue of Nature I recently browsed [on my NUS trip] contained a news feature on agent-based models applied to understanding the opioid crisis in the US. (With a rather sordid picture of a drug injection in Philadelphia, hence my own picture.)

To create an agent-based model, researchers first ‘build’ a virtual town or region, sometimes based on a real place, including buildings such as schools and food shops. They then populate it with agents, using census data to give each one its own characteristics, such as age, race and income, and to distribute the agents throughout the virtual town. The agents are autonomous but operate within pre-programmed routines — going to work five times a week, for instance. Some behaviours may be more random, such as a 5% chance per day of skipping work, or a 50% chance of meeting a certain person in the agent’s network. Once the system is as realistic as possible, the researchers introduce a variable such as a flu virus, with a rate and pattern of spread based on its real-life characteristics. They then run the simulation to test how the agents’ behaviour shifts when a school is closed or a vaccination campaign is started, repeating it thousands of times to determine the likelihood of different outcomes.
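A toy sketch of the recipe described in the quote above, with agents following a daily routine, a 5% chance of skipping work, and a flu-like variable spreading among those present. All parameters and mechanics are invented for illustration; a real model would calibrate them to census and epidemiological data:

```python
import random

class Agent:
    """A minimal agent: its only characteristic here is infection status."""
    def __init__(self, infected=False):
        self.infected = infected

def simulate(n_agents=1000, n_days=30, p_skip=0.05, p_transmit=0.02, seed=None):
    """Run one simulation; returns the final number of infected agents."""
    rng = random.Random(seed)
    # One initially infected agent seeds the outbreak.
    agents = [Agent(infected=(i == 0)) for i in range(n_agents)]
    for _ in range(n_days):
        # Pre-programmed routine: each agent goes to work unless it skips.
        at_work = [a for a in agents if rng.random() > p_skip]
        if not at_work:
            continue
        n_sick = sum(a.infected for a in at_work)
        # Transmission chance grows with the share of sick co-workers.
        for a in at_work:
            if not a.infected and rng.random() < p_transmit * n_sick / len(at_work):
                a.infected = True
    return sum(a.infected for a in agents)

# As the article notes, the simulation is repeated many times to estimate
# the distribution of outcomes; here just a handful of runs:
runs = [simulate(seed=s) for s in range(5)]
```

Repeating `simulate` with different interventions (closing the "workplace", lowering `p_transmit`) is the agent-based analogue of testing a school closure or vaccination campaign.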

While I am obviously supportive of simulation-based solutions, I cannot but express some reservations about the outcome, given that it is the product of the assumptions in the model. In Bayesian terms, this is purely prior predictive rather than posterior predictive. There is no hard data to create “realism”, apart from the census data. (The article also mixes the outcome of the simulation with real data. Or with epidemiological data, not yet available according to the authors.)

In response to the opioid epidemic, Bobashev’s group has constructed Pain Town — a generic city complete with 10,000 people suffering from chronic pain, 70 drug dealers, 30 doctors, 10 emergency rooms and 10 pharmacies. The researchers run the model over five simulated years, recording how the situation changes each virtual day.

This is not to criticise the use of such tools to experiment with social, medical or political interventions, which practically and ethically cannot be tested in real life; working with such targeted versions of the Sims game can paradoxically be more convincing when dealing with policy makers. Provided they do not object to the artificiality of the outcome, as they often do for climate-change models. Just from reading this general-public article, I thus wonder whether model selection and validation tools are implemented in conjunction with agent-based models…

Is nothing > data?

Posted in pictures, Statistics, University life on July 31, 2017 by xi'an

A fairly interesting take on whether or not data should be singular (an issue that does not occur in French!):

In the Dark

I got this yesterday from one of my office mates who suggested that I stick it somewhere. It’s an advert for a data science company called Pivigo. Logically, the statement on the sticker implies that data is less than nothing, which I don’t think is the point that they’re trying to make. On the other hand, I suppose that by posting this I’ve given Pivigo some free advertising so in some sense it is a successful promotional ploy!

Anyway, when I posted this on Twitter it sparked a little discussion about the vexed issue of whether the word `data’ is singular or plural, so I decided to bore my readers with thoughts on that – not that I’m pedantic or anything.

The word `data’ is formed from the latin plural of the word `datum’ (itself formed from the past participle of the latin verb `dare’, meaning `to give’) hence meaning…
