Archive for counterfactuals

efficient measures?

Posted in Books, Statistics, University life on July 24, 2022 by xi'an


When checking the infographics of the week highlighted by Nature, I came across this comparison of France and Germany for the impact of their respective vaccination mandates on health and economics. And then realised this was from a preprint by a Paris Dauphine colleague, Miquel Oliu-Barton (and co-authors). The above graphs compare the impact of governmental measures towards vaccination, short of compulsory vaccination (unfortunately). Between Germany and France, it appears as if the measures were more effective in the latter. Which may be interpreted either as a consequence of the measures being more coercive in [unruly] France or as an illustration of the higher discipline of German society [despite the government contemplating compulsory vaccination for a while]. As an aside, I am very surprised at the higher death rate in Germany but, besides a larger percentage of people over 65 there and a lower life expectancy, the French curve is interrupted in December 2021. Looking at 2022, the peak was reached at 3.3 cases per day per million people.

Concerning the red counterfactual curves, I did not find much explanation in the preprint, apart from

“Our results are supported by the well-established econometric method of synthetic control.³⁰ We construct counterfactuals for each treated country based on a weighted average of countries that did not implement the COVID certificate and find consistent trajectories for the time period where this method is feasible, i.e., until the end of September 2021.”

and

“constructing counterfactuals (i.e., by modelling vaccine uptake without this intervention), using innovation diffusion theory.⁶ Innovation diffusion theory was introduced to model how new ideas and technologies spread”

which is not particularly helpful without further reading.
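The synthetic-control construction itself is easy to sketch: the treated country's pre-intervention series is approximated by a convex combination of donor countries that did not implement the COVID certificate, and the fitted weights then define the counterfactual trajectory after the intervention. Here is a minimal illustration of that idea, with an invented helper and invented numbers, making no claim to reproduce the authors' actual implementation:

```python
import numpy as np
from scipy.optimize import minimize

def synthetic_control_weights(y_treated, Y_donors):
    """Simplex-constrained least squares: find convex weights w so that
    Y_donors @ w tracks y_treated over the pre-intervention period."""
    n_donors = Y_donors.shape[1]
    loss = lambda w: np.sum((y_treated - Y_donors @ w) ** 2)
    constraints = {"type": "eq", "fun": lambda w: np.sum(w) - 1.0}
    bounds = [(0.0, 1.0)] * n_donors
    w0 = np.full(n_donors, 1.0 / n_donors)
    return minimize(loss, w0, bounds=bounds, constraints=constraints).x

rng = np.random.default_rng(0)

# toy data: weekly vaccine uptake (%) for 5 donor countries, 12 pre-intervention weeks
pre_donors = rng.uniform(20, 60, size=(12, 5))
pre_treated = pre_donors @ np.array([0.5, 0.3, 0.2, 0.0, 0.0]) + rng.normal(0, 0.5, 12)

w = synthetic_control_weights(pre_treated, pre_donors)

# counterfactual uptake after the intervention = weighted average of the donors' observed series
post_donors = rng.uniform(40, 80, size=(8, 5))   # 8 post-intervention weeks
counterfactual = post_donors @ w
```

The weights are fitted on pre-intervention data only, so the gap between the observed post-intervention series and the weighted donor average is what gets read as the effect of the measure.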

ABC at sea and at war

Posted in Books, pictures, Statistics, Travel on July 18, 2017 by xi'an

While preparing crêpes at home yesterday night, I browsed through the most recent issue of Significance and, among many goodies, I spotted an article by McKay and co-authors discussing the simulation of a British vs. German naval battle from the First World War that I had never heard of, the Battle of the Dogger Bank. The article was illustrated by a few historical pictures, but I quickly came across a more statistical description of the problem, which was not about creating wargames and alternate realities but rather about inferring the likelihood of the actual outcome, i.e., whether the naval battle outcome [which could be seen as a British victory, ending up with 0 to 1 sunk boat] was a lucky strike or to be expected. And the method behind solving this question was indeed both Bayesian and ABC-esque! I did not read the longer paper by McKay et al. (hard to do while flipping crêpes!) but the description in Significance was clear enough to understand that the six summary statistics used in this ABC implementation were the numbers of shots, hits, and lost turrets for both sides. (The answer to the original question is that the British fleet was indeed lucky to keep all its boats afloat. But it is also unlikely that another score would have changed the outcome of WWI.) [As I found in this other history paper, ABC seems quite popular in historical inference! And there is another completely unrelated arXived paper with the main title The Fog of War…]
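For readers who have never run such a sampler, a bare-bones ABC rejection scheme built on these six summary statistics could look like the sketch below; the battle simulator, prior, observed values and tolerance are all invented for the illustration and bear no relation to McKay et al.'s actual model.

```python
import numpy as np

rng = np.random.default_rng(42)

# observed summaries: (shots, hits, lost turrets) for each fleet -- illustrative numbers only
s_obs = np.array([1050, 70, 2, 920, 20, 1])

def simulate_battle(theta):
    """Toy engagement simulator; theta = (British hit rate, German hit rate).
    Returns the six summary statistics, in the same order as s_obs."""
    p_brit, p_germ = theta
    shots_b, shots_g = rng.poisson(1000), rng.poisson(900)
    hits_b, hits_g = rng.binomial(shots_b, p_brit), rng.binomial(shots_g, p_germ)
    turrets_b = rng.binomial(hits_g, 0.03)   # British turrets lost to German hits
    turrets_g = rng.binomial(hits_b, 0.03)   # German turrets lost to British hits
    return np.array([shots_b, hits_b, turrets_b, shots_g, hits_g, turrets_g])

def abc_rejection(n_sims=20_000, tol=0.3):
    """Keep the parameter draws whose simulated summaries fall close to the observed ones."""
    kept = []
    for _ in range(n_sims):
        theta = rng.uniform(0.0, 0.2, size=2)                  # uniform prior on both hit rates
        s_sim = simulate_battle(theta)
        dist = np.linalg.norm((s_sim - s_obs) / (s_obs + 1))   # crude scaled distance
        if dist < tol:
            kept.append(theta)
    return np.array(kept)

posterior_sample = abc_rejection()
```

The accepted draws approximate the posterior on the hit rates, from which one could replay the battle and estimate how often the British fleet keeps all its ships afloat.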

from statistical evidence to evidence of causality

Posted in Books, Statistics on December 24, 2013 by xi'an

I took the opportunity of having to wait a long while at a local administration today (!) to read an arXived paper by Dawid, Musio and Fienberg on the difficulty, both philosophical and practical, of establishing the probabilities of the causes of effects. The first interesting thing about the paper is that it relates to the Médiator drug scandal that took place in France in the past years and is still under trial: thanks to the investigations of a local doctor, Irène Frachon, the drug was exposed as an aggravating factor for heart disease. Or maybe the cause. Frachon's case-control study summarises into a 2×2 table with a corrected odds ratio of 17.1. From there, the authors expose the difficulties of drawing inference about causes of effects, i.e. causality, an aspect of inference that has always puzzled me. (And the paper led me to search for the distinction between odds ratio and risk ratio.)
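As a reminder (looked up on my own, not quoted from the paper), for a 2×2 table with a exposed cases, b exposed controls, c unexposed cases and d unexposed controls,

\[ \mathrm{OR}=\frac{a\,d}{b\,c}, \qquad \mathrm{RR}=\frac{a/(a+b)}{c/(c+d)}, \]

so the two ratios roughly agree when the outcome is rare, and only the odds ratio is directly estimable from a case-control design.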

“And the conceptual and implementational difficulties that we discuss below, that beset even the simplest case of inference about causes of effects, will be hugely magnified when we wish to take additional account of such policy considerations.”

A third interesting notion in the paper is the inclusion of counterfactuals. My introduction to counterfactuals dates back to a run on the back-country roads around Ithaca, New York, when George told me about a discussion paper from Phil he was editing for JASA on that notion, with his philosopher neighbour Steven Schwartz as a discussant. (It was a great run, presumably in the late Spring. And the best introduction I could dream of!) Now, the paper starts from the counterfactual perspective to conclude that inference is close to impossible in this setting. Within my limited understanding, I would see that as a drawback of using counterfactuals, rather than of drawing inference about causes. If the corresponding statistical model is nonidentifiable, because one of the two responses is always missing, the model seems inappropriate. I am also surprised at the notion of “sufficiency” used in the paper, since it sounds like the background information cancels the need to account for the treatment (e.g., aspirin) decision. The fourth point is the derivation of bounds on the probabilities of causation, despite everything! Quite an interesting read thus!
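For the record, the textbook lower bound on the probability of causation (under an exogeneity assumption, and leaving aside the paper's more refined bounds) reads, in terms of the risk ratio RR,

\[ \mathrm{PC} \;\geq\; \max\!\left(0,\; 1-\frac{1}{\mathrm{RR}}\right), \]

so a ratio of the order of 17.1 would already push the bound above 0.94.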

Uneducated guesses

Posted in Books, Kids, Statistics, University life on January 12, 2012 by xi'an

I received this book, Uneducated Guesses: Using Evidence to Uncover Misguided Education Policies by Howard Wainer, from Princeton University Press for review in CHANCE. Alas, I am presumably one of the least likely adequate reviewers for the book in that

  • having done all of my academic training in France (except for my most useful post-doctoral training at Purdue and at Cornell), I never took any of those ACT/SAT/&tc tests (except for the GRE at the very end of my Ph.D., towards a post-doctoral grant I did not get!);
  • teaching in a French university, I never used any of those tests to compare undergraduate or graduate applicants;
  • I am only very marginally aware of the admission process in US universities at the undergraduate level, even though I knew about the early admission policy;
  • there is no equivalent in the French high school system, given that high school students have to sit a national week-long exam, le baccalauréat, to enter higher education, and that most curricula actually select applicants on the basis of the high school record, prior to [but conditional on] the baccalauréat.

Thus, this review of Wainer’s Uneducated Guesses is to be taken with pinches (or even tablespoons) of salt, and to be weighed against other reviews, especially in Statistics journals (of which I could not find any).

“My role in this parallels Spock’s when he explained ‘Nowhere am I so desperately needed as among a shipload of illogical humans.’” (page 157)

First, the book is very pleasant to read, with a witty and whimsical way of pushing strong (and well-argued) opinions. Even as a complete bystander, I found the arguments advanced for keeping the SAT as the preferential tool for student selection quite engaging, and the later arguments against teacher and college rankings equally sensible. So the book should appeal to a large chunk of the public, such as prospective students, parents, high school teachers, or college selection committees. (Scholars of entrance tests may already have seen the arguments, since most of the chapters are based on earlier papers by Howard Wainer.)

Second, and this is yet another reason why I feel remote from the topic, the statistical part of the analysis is simply not covered in the book. There are tables and there are graphs, there are regressions and there are interpolation curves, there is a box-plot and there are normal densities, but I am missing a statistical model that would push us further than the common sense that permeates the whole book. After reading the book, my thirst for the modelling of educational tests and rankings is thus far from quenched! (Note that I am not saying the author is ignorant of such matters, since he has published in psychometrics, educational statistics, and other statistics journals, and taught Statistics at Wharton. The technical side of the argument does exist, but it is not included in the book. The author refers to Gelman et al., 1995, and to the fruitful Bayesian approach on page 69.)

