“Some of us, laureates of the Nobel Prize in economics, have been cited by candidates in the French presidential election, notably by Marine Le Pen and her team, to justify a political programme on the question of Europe. The signatories of this letter hold different positions on complex subjects such as monetary union and stimulus policies. Nevertheless, our opinions converge in condemning this instrumentalisation of economic thinking in the French electoral campaign.
– European integration is vital not only to maintain peace on the continent, but also for the economic progress of the member states and their political power in the world.
– The changes proposed by anti-European programmes would destabilise France and call into question the cooperation between European countries, which today ensures economic and political stability in Europe.
– Isolationist and protectionist policies and competitive devaluations, all pursued at the expense of other countries, are dangerous ways of trying to generate growth. They provoke retaliatory measures and trade wars. In the end, they will prove harmful to France as well as to its trading partners.
– When they are well integrated into the labour market, migrants can be an economic opportunity for the host country. Several of the most prosperous countries in the world have been able to welcome and integrate immigrants.
– There is a great difference between choosing not to join the euro in the first place and leaving it after having adopted it.
– The commitments to social justice must be renewed, so as to guarantee and develop fairness and social protection, in keeping with France's traditional values of liberty, equality, and fraternity. But this social protection can and must be achieved without economic protectionism.
– At a time when Europe and the world face unprecedented trials, more solidarity is needed, not less. The problems are too serious to be entrusted to divisive politicians.”
Angus Deaton (Princeton, prix Nobel en 2015), Peter Diamond (MIT, 2010), Robert Engle (NYU, 2003), Eugene Fama (Chicago, 2013), Lars Hansen (Chicago, 2013), Oliver Hart (Harvard, 2016), Bengt Holmström (MIT, 2016), Daniel Kahneman (Princeton, 2002), Finn Kydland (CMU, 2004), Eric Maskin (Harvard, 2007), Daniel McFadden (Berkeley, 2000), James Mirrlees (Cambridge, 1996), Robert Mundell (Columbia, 1999), Roger Myerson (Chicago, 2007), Edmund Phelps (Columbia, 2005), Chris Pissarides (LSE, 2010), Alvin Roth (Stanford, 2012), Amartya Sen (Harvard, 1998), William Sharpe (Stanford, 1990), Robert Shiller (Yale, 2013), Christopher Sims (Princeton, 2011), Robert Solow (Columbia, 1987), Michael Spence (Stanford, 2001), Joseph Stiglitz (Columbia, 2001), Jean Tirole (Toulouse School of Economics, 2014)
Archive for economics
Believe it or not, I had never read Freakonomics…! Therefore, when I saw the book on sale for a negligible price at Delhi Airport—great airport, by the way!—I went for it. Having now read a fair chunk of the book (or an unfair chunk, see below!) within two days (during my metro rides), I am rather disappointed by the content and thus puzzled by the craze the book generated at the time. (Andrew just loved it!) Freakonomics is certainly a well-designed product from a salesman's perspective, and it thus reads pleasantly enough, but I find it remains at too superficial a level. In addition, it sometimes sounds as if the authors have a hidden agenda (more on this later)! (Now, of course, my reaction to the book is completely irrelevant, as it comes very late after the 2005 publication. So, reader, stop here if you do not want to waste any more time!)
“…if the death penalty were assessed to anyone carrying an illegal gun, and if the penalty were actually enforced, gun crimes would surely plunge.” (p.118)
To wit, the way the book is written sounds much more journalistic than academic: the authors take an economic study or paper about an unusual (freakish) topic and weave a nice story around it, always with the intent of showing that “conventional wisdom” is wrong. Since this is a general-public book, there is no theory behind the story and it all seems to flow from “common sense”: yes, most drug dealers do not earn enough to make a living, because the corporate structure of the drug economy is highly hierarchical and just as highly biased towards the higher levels. The only foray into theory, namely the discussion of the factors impacting kids' success rates at school, casts doubt on regression, and the distinction between causation and correlation is never truly investigated (even though the mantra “correlation is not causation” appears often enough!). Moreover, by resorting to the journalistic trick of making everything very personal (so-and-so went to drug dealers' housing projects for six years, so-and-so decided to re-analyse the school records in Chicago, &tc.), the authors actually lower the credentials of their theories. If so-and-so found this effect, maybe another, or a hundred other, so-and-sos found the opposite! But those others are not mentioned, as the book retains this flat, one-sided perspective… And the conclusions are anti-climactic: when so-and-so gets hold of the ledgers of a crack-dealing gang, the description stops at reporting the hierarchical structure of the organisation and the revenues of its different levels. No major theory appears to be tested. At least within the book.
“Given the number of handguns in the United States (…), the probability that a particular gun was used to kill someone that year is 1 in 10,000. The typical gun buyback yields fewer than 1,000 guns—which translates into an expectation of less than one-tenth of one homicide per buyback.” (p.121)
The above quote puzzled me for a while, until I formalised the experiment as a hypergeometric draw of 1,000 guns from a population of almost 300 million guns, out of which more than 10,000 are crime guns. (The probability that a particular gun is used for a crime is then 1 in 30,000.) And the probability of drawing at least one of those guns in one buyback is then approximately 0.03. But this seems to miss the other side of the equation, namely the worth of a human life. (Not that I believe that gun buybacks are particularly effective since, as noted by the authors, they mostly attract “heirlooms or junk” (p.121).)
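For the record, the hypergeometric calculation above is easy to check numerically; here is a minimal sketch, with the figures taken directly from the passage (N = 300 million guns, K = 10,000 crime guns, n = 1,000 guns per buyback):

```python
# Figures from the passage: N guns in total, K of them "crime guns",
# and a single buyback draws n guns without replacement.
N, K, n = 300_000_000, 10_000, 1_000

# Hypergeometric probability of drawing zero crime guns:
# P(0) = C(N-K, n) / C(N, n) = prod_{i=0}^{n-1} (N - K - i) / (N - i)
p_none = 1.0
for i in range(n):
    p_none *= (N - K - i) / (N - i)

p_at_least_one = 1.0 - p_none
print(p_at_least_one)  # approximately 0.033
```

Since n is tiny relative to N, this is essentially the binomial approximation 1 − (1 − 1/30,000)¹⁰⁰⁰ ≈ 0.033, in agreement with the 0.03 figure above.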
A minor disappointment was to stumble upon the conclusion of the book in… its very middle! I first thought I was confused and that this was only the conclusion to a section, but no: the second half of the book as I bought it was made up of extracts from the authors' column in the New York Times and from their blog, which gets close to a swindle in my opinion! Or at least constitutes the unfair chunk mentioned above. I also find annoying (and so does Andrew!) the insistence upon being rogue economists, as advertised on the front cover of the book, since the authors have shown themselves to be very efficient economists by turning the freakonomics idea into a whole business: books, films, videos, lectures, &tc. Nothing to complain about, except for the rogue label. (Note that they should have registered the franchise as well, given the subsequent profusion of -omics books and sites, from the fantastic Freakonometrics blog of my former colleague Arthur Charpentier, to Soccernomics, which I recently bought for my son…)
Gérard Biau, Frédéric Cérou, and Arnaud Guyader recently posted an arXiv paper on the foundations of ABC, entitled “New insights into Approximate Bayesian Computation”. They also submitted it to several statistics journals, with no success so far, which I find rather surprising. Indeed, the paper analyses the ABC algorithm as it is actually implemented (as in DIYABC, for instance), i.e. with a tolerance bound ε determined as a quantile of the simulated distances, say the 10% or the 1% quantile. This means in particular that the interpretation of ε as a non-parametric bandwidth, while interesting and prevalent in the literature (see, e.g., Fearnhead and Prangle's discussion paper), is only an approximation of the actual practice.
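The practice in question—setting ε as an empirical quantile of the simulated distances, i.e. keeping the k nearest simulations—can be sketched on a toy example. Everything below (the normal-mean model, the prior, the observed value) is a made-up illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: infer the mean theta of a Normal(theta, 1)
# sample of size 50, summarised by its sample mean, under a wide
# Normal(0, 10) prior. The observed summary value is made up.
y_obs = 2.3          # observed sample mean (assumed)
n_sim = 100_000      # number of simulations from the prior
alpha = 0.10         # tolerance set as the 10% quantile of the distances

theta = rng.normal(0.0, 10.0, size=n_sim)        # draws from the prior
y_sim = rng.normal(theta, 1.0 / np.sqrt(50))     # simulated sample means
dist = np.abs(y_sim - y_obs)                     # distance between summaries

# ABC as actually practised: epsilon is the empirical alpha-quantile of
# the simulated distances, so we keep the k = alpha * n_sim nearest
# simulations -- exactly a k-nearest neighbour selection.
eps = np.quantile(dist, alpha)
accepted = theta[dist <= eps]

print(len(accepted), accepted.mean())
```

The accepted values then approximate the posterior on theta; the knn reading is immediate, since the acceptance region is by construction the set of the k smallest distances rather than a fixed bandwidth.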
The authors of this new paper focus on the mathematical foundations of this practice by (re)analysing ABC as a k-nearest neighbour (knn) method. Using generic knn results, they derive a consistency property for the ABC algorithm by imposing constraints on the rate of decrease of the quantile as a function of n. (The setting is restricted to the use of sufficient statistics or, equivalently, to a distance over the whole sample; the issue of summary statistics is not addressed by the paper.) The paper also contains a perfectly rigorous proof (the first one?) of the convergence of ABC when the tolerance ε goes to zero. The mean integrated square error consistency of the conditional kernel density estimate is established for a generic kernel (under the usual assumptions). Further assumptions (on the target and on the kernel) allow the authors to obtain precise convergence rates (as a power of the sample size), derived from classical k-nearest neighbour regression, in dimensions m larger than 4… The paper is completely theoretical and highly mathematical (with 25 pages of proofs!), which may explain why it did not meet with success from editors and referees; however, I definitely think (an abridged version of) this work clearly deserves publication in a top statistics journal as a reference for the justification of ABC! The authors also mention future work in that direction: I would strongly suggest they consider the case of insufficient summary statistics from this knn perspective.
“Statistical significance is not a scientific test. It is a philosophical, qualitative test. It asks “whether”. Existence, the question of whether, is interesting. But it is not scientific.” S. Ziliak and D. McCloskey, p.5
The book, written by economists Stephen Ziliak and Deirdre McCloskey, has a theme bound to attract Bayesians and all those puzzled by the absolute and automatised faith in significance tests. The main argument of the authors is indeed that an overwhelming majority of papers stop at rejecting variables (“coefficients”) on the sole and unsupported basis of non-significance at the 5% level. Hence the subtitle “How the standard error costs us jobs, justice, and lives”… This is an argument I completely agree with; however, the aggressive style of the book truly put me off! As with Error and Inference, which also addresses a non-Bayesian issue, I could have let the matter go, but I feel the book may in the end be counter-productive, and I thus endeavour to explain why through this review. (I wrote the following review in batches, before and during my trip to Dublin, so the going is rather broken, I am afraid…)
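The authors' core point—that “not significant at 5%” is not evidence of “no effect”—is easy to illustrate by simulation. The numbers below (effect size, noise level, sample size) are made up for the sketch, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical illustration: a true slope of 0.5 (economically meaningful)
# in a noisy regression with only n = 30 observations. We repeat the
# experiment many times and count how often the 5% t-test flags it.
beta_true, n, sigma, n_rep = 0.5, 30, 3.0, 2_000

rejections = 0
for _ in range(n_rep):
    x = rng.normal(size=n)
    y = beta_true * x + rng.normal(scale=sigma, size=n)
    beta_hat = np.sum(x * y) / np.sum(x * x)   # OLS slope (no intercept)
    resid = y - beta_hat * x
    se = np.sqrt(np.sum(resid ** 2) / (n - 1) / np.sum(x * x))
    if abs(beta_hat / se) > 2.045:             # two-sided 5% t cutoff, df = 29
        rejections += 1

power = rejections / n_rep
print(power)  # well below 0.5: the true effect is usually declared "insignificant"
```

A study reading each of these non-rejections as “the coefficient is zero” would discard a real, sizeable effect most of the time—precisely the practice Ziliak and McCloskey denounce.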