Archive for simulation model

simulating the pandemic

Posted in Books, Statistics on November 28, 2020 by xi'an

The 13 November issue of Nature has a general-audience article on how simulations of the COVID pandemic could benefit from the experience gained in climate-modelling methodology.

“…researchers didn’t appreciate how sensitive CovidSim was to small changes in its inputs, their results overestimated the extent to which a lockdown was likely to reduce deaths…”

The argument is essentially Bayesian: rather than using a single best guess of the model parameters, one should propagate the uncertainty about them, especially given the state of the available data (all the worse back in March). When I read

“…epidemiologists should stress-test their simulations by running ‘ensemble’ models, in which thousands of versions of the model are run with a range of assumptions and inputs, to provide a spread of scenarios with different probabilities…”

it sounds completely Bayesian, even though there is no discussion of the prior modelling or of the degree of wrongness of the epidemic model itself. The researchers at UCL who conducted the multiple simulations and assessed the sensitivity to the 940 parameters found that 19 of them had a strong impact, mostly

“…the length of the latent period during which an infected person has no symptoms and can’t pass the virus on; the effectiveness of social distancing; and how long after getting infected a person goes into isolation…”

but this outcome is predictable (and interesting). Mentions of Bayesian methods appear at the end of the article:

“…the uncertainty in CovidSim inputs [uses] Bayesian statistical tools — already common in some epidemiological models of illnesses such as the livestock disease foot-and-mouth.”

and

“Bayesian tools are an improvement, says Tim Palmer, a climate physicist at the University of Oxford, who pioneered the use of ensemble modelling in weather forecasting.”

along with ensemble modelling, which sounds like a synonym for Bayesian model averaging… (The April issue on the topic also had Bayesian aspects that were explicitly mentioned.)
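To make the connection between ensemble runs and Bayesian uncertainty propagation concrete, here is a minimal sketch under assumptions entirely of my own making: a toy discrete-time SIR model (nothing like CovidSim), illustrative prior ranges for three inputs, and a crude screen of which input drives the output most.

```python
import numpy as np

rng = np.random.default_rng(42)

def sir_deaths(beta, gamma, ifr, n_days=180, pop=1e6, i0=10):
    """Toy discrete-time SIR model returning cumulative deaths."""
    s, i, r = pop - i0, i0, 0.0
    deaths = 0.0
    for _ in range(n_days):
        new_inf = beta * s * i / pop
        new_rec = gamma * i
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        deaths += ifr * new_rec   # a fraction of resolved cases die
    return deaths

# 'Ensemble': thousands of runs with inputs drawn from plausible ranges
# (these priors are illustrative assumptions, not CovidSim's).
n_runs = 5000
betas  = rng.uniform(0.15, 0.45, n_runs)   # transmission rate
gammas = rng.uniform(0.08, 0.20, n_runs)   # recovery rate
ifrs   = rng.uniform(0.002, 0.02, n_runs)  # infection fatality ratio

outcomes = np.array([sir_deaths(b, g, f)
                     for b, g, f in zip(betas, gammas, ifrs)])

# Spread of scenarios rather than a single best guess
print(np.percentile(outcomes, [5, 50, 95]))

# Crude sensitivity screen: correlate each input with the outcome
for name, par in [("beta", betas), ("gamma", gammas), ("ifr", ifrs)]:
    print(name, np.corrcoef(par, outcomes)[0, 1].round(2))
```

The percentile spread is exactly the "spread of scenarios" of the quote; a full Bayesian analysis would additionally reweight the draws by their fit to observed case or death data.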

agent-based models

Posted in Books, pictures, Statistics on October 2, 2018 by xi'an

An August issue of Nature I recently browsed [on my NUS trip] contained a news feature on agent-based models applied to understanding the opioid crisis in the US. (With a rather sordid picture of a drug injection in Philadelphia, hence my own picture.)

To create an agent-based model, researchers first ‘build’ a virtual town or region, sometimes based on a real place, including buildings such as schools and food shops. They then populate it with agents, using census data to give each one its own characteristics, such as age, race and income, and to distribute the agents throughout the virtual town. The agents are autonomous but operate within pre-programmed routines — going to work five times a week, for instance. Some behaviours may be more random, such as a 5% chance per day of skipping work, or a 50% chance of meeting a certain person in the agent’s network. Once the system is as realistic as possible, the researchers introduce a variable such as a flu virus, with a rate and pattern of spread based on its real-life characteristics. They then run the simulation to test how the agents’ behaviour shifts when a school is closed or a vaccination campaign is started, repeating it thousands of times to determine the likelihood of different outcomes.
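The recipe in this quote can be condensed into a few lines of code. The sketch below is a deliberately minimal illustration of the mechanism (agents with census-like attributes, routines, chance behaviours, a seeded infection), with every number and probability made up for the example and no connection to any published model.

```python
import random

random.seed(1)

class Agent:
    def __init__(self):
        self.age = random.randint(0, 90)   # census-like attribute
        self.infected = False
        self.contacts = []                 # social network

# Build a 'virtual town' of agents and wire a crude random contact network
town = [Agent() for _ in range(1000)]
for a in town:
    a.contacts = random.sample(town, 10)

# Seed the infection
for a in random.sample(town, 5):
    a.infected = True

P_SKIP_WORK = 0.05   # 5% chance per day of skipping work
P_TRANSMIT  = 0.03   # illustrative per-contact transmission probability

# Run daily rounds of the pre-programmed routine
for day in range(60):
    for a in town:
        if random.random() < P_SKIP_WORK:
            continue                       # stays home, meets no one
        if a.infected:
            for c in a.contacts:
                if random.random() < P_TRANSMIT:
                    c.infected = True

print(sum(a.infected for a in town), "infected after 60 days")
```

Repeating such a run thousands of times, with and without an intervention switched on, gives the distributions of outcomes the article refers to.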

While I am obviously supportive of simulation-based solutions, I cannot but express some reservation about the outcome, given that it is the product of the assumptions in the model. In Bayesian terms, this is purely prior predictive rather than posterior predictive. There is no hard data to create "realism", apart from the census data. (The article also mixes the outcome of the simulation with real data, or rather with epidemiological data, which is not yet available according to the authors.)
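To spell out the prior versus posterior predictive distinction, here is a toy conjugate example of my own (a Poisson count model with a Gamma prior, nothing to do with the opioid model): the prior predictive simulates data from parameters drawn from the prior alone, which is all an uncalibrated simulation can do, while the posterior predictive first conditions those parameters on observed data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: daily counts ~ Poisson(lambda), prior lambda ~ Gamma(2, scale=10)
prior_shape, prior_scale = 2.0, 10.0

# Prior predictive: simulate with no data at all
lam_prior = rng.gamma(prior_shape, prior_scale, 5000)
prior_pred = rng.poisson(lam_prior)

# Posterior predictive: condition on observed counts first
# (Gamma-Poisson conjugacy gives the posterior in closed form)
obs = np.array([12, 15, 9, 14, 11])
post_shape = prior_shape + obs.sum()
post_scale = 1.0 / (1.0 / prior_scale + len(obs))
lam_post = rng.gamma(post_shape, post_scale, 5000)
post_pred = rng.poisson(lam_post)

print("prior predictive 90% interval:", np.percentile(prior_pred, [5, 95]))
print("posterior predictive 90% interval:", np.percentile(post_pred, [5, 95]))
```

The posterior predictive interval is much tighter, which is precisely what a calibration based on census data alone cannot deliver.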

In response to the opioid epidemic, Bobashev’s group has constructed Pain Town — a generic city complete with 10,000 people suffering from chronic pain, 70 drug dealers, 30 doctors, 10 emergency rooms and 10 pharmacies. The researchers run the model over five simulated years, recording how the situation changes each virtual day.

This is not to criticise the use of such tools to experiment with social, medical or political interventions that practically and ethically cannot be tested in real life; working with such targeted versions of the Sims game can paradoxically be more convincing when dealing with policy makers, provided they do not object to the artificiality of the outcome, as they often do for climate-change models. Just from reading this general-audience article, I thus wonder whether model selection and validation tools are implemented in conjunction with agent-based models…

how to build trust in computer simulations: Towards a general epistemology of validation

Posted in Books, pictures, Statistics, Travel, University life on July 8, 2015 by xi'an

I have rarely attended a workshop with such a precise goal, but then I have never attended a philosophy workshop either… Tonight, I am flying to Han(n)over, Lower Saxony, for a workshop on the philosophical aspects of simulation models. I was quite surprised to be invited to this workshop, but find it quite a treat to attend a multi-disciplinary meeting about simulations and their connection with the real world! I am less certain I can contribute anything meaningful, but I still look forward to it. And will report on the discussions, hopefully. Here is the general motivation of the workshop:

“In the last decades, our capacities to investigate complex systems of various scales have been greatly enhanced by the method of computer simulation. This progress is not without a price though: We can only trust the results of computer simulations if they have been properly validated, i.e., if they have been shown to be reliable. Despite its importance, validation is often still neglected in practice and only poorly understood from a theoretical perspective. The aim of this conference is to discuss methodological and philosophical problems of validation from a multidisciplinary perspective and to take first steps in developing a general framework for thinking about validation. Working scientists from various natural and social sciences and philosophers of science join forces to make progress in understanding the epistemology of validation.”

a week in Warwick

Posted in Books, Kids, Running, Statistics, University life on October 19, 2014 by xi'an

Canadian geese, Warwick

This past week in Warwick has been quite enjoyable and profitable, from staying once again in a math house, to taking advantage of the new bike, to having long discussions on several prospective and exciting projects, to meeting some of the new postdocs and visitors, to attending Tony O'Hagan's talk on "wrong models". And then having Simo Särkkä, who was visiting Warwick this week, discuss his paper with me. And Chris Oates doing the same with his recent arXival with Mark Girolami and Nicolas Chopin (soon to be commented upon, of course!). And managing to run in dry conditions despite the heavy rains (but in pitch dark, as sunrise is now quite late, with the help of a headlamp and the beauty of a countryside starry sky). I also evaluated several students' projects, two of which led me to wonder when using RJMCMC is appropriate for comparing two models. In addition, I eloped one evening to visit old (1977!) friends in Northern Birmingham, despite fairly dire London Midlands performances between Coventry and Birmingham New Street, the only redeeming feature being that the connecting train there was also late by one hour! (Not mentioning the weirdest taxi-driver ever on my way back, who tried to get my opinion on whether or not he should have an affair… which at least kept me awake for the whole trip!) Definitely looking forward to my next trip there at the end of November.