Archive for sociology

my neighbourhood in the New Yorker

Posted in Books, pictures, Travel on October 14, 2020 by xi'an

While I was reading (part of) a recent issue of The New Yorker over breakfast, I was surprised to find my neighbouring city of Bourg-la-Reine (twin city, Kenilworth!) mentioned in a tribune! It was about an interview with the local authors of a boardgame called Kapital!, presented there as a form of (French) anti-Monopoly. Both authors were CNRS researchers in sociology until they retired, and they have produced (even) more militant output since then, including this boardgame. The (unintended?) fun in the tribune is the opposition between the May 68 style class warfare denounced by the authors and their apparently well-off conditions (no BBQ in the street there!).

the end of statistics [not!]

Posted in Statistics on January 31, 2017 by xi'an

Last week I spotted this tribune in The Guardian, with the witty title of statistics losing its power, and sort of over-reacted by trying to gather enough momentum from colleagues towards writing a counter-column. After a few days of decantation and a few more reads of the tribune, I cooled down towards a more lenient perspective, even though I still dislike the [catastrophic and journalistic] title. The paper is actually mostly right (!), from its historical recap of the evolution of (official) statistics across centuries, to the different nature of “big data” statistics. (The author is “William Davies, a sociologist and political economist. His books include The Limits of Neoliberalism and The Happiness Industry.”)

“Despite these criticisms, the aspiration to depict a society in its entirety, and to do so in an objective fashion, has meant that various progressive ideals have been attached to statistics.”

A central point is that public opinion has less confidence in (official) statistics than it used to. (Warning: major understatement here!) There are many reasons for this: numbers being used to support any argument and its opposite; statistics (and statisticians) being associated with experts, found at every corner of news and media, hence with the “elite” arch-enemy; a growing innumeracy of both the general public and of the said “elites” (like this “expert” in a debate about the 15th anniversary of the Euro currency on the French NPR last week, equating a raise from 2.4 Francs to 6.5 Francs with a 700% increase…), favouring rhetoric over facts; and a disintegration of the social structure that elevates one’s community over others and dismisses arguments from those others, especially those addressed at the entire society. The current debate about post-truths and alternative facts (and the very fact there can even be a debate about it!) is a sad illustration of this regression in public discourse. The overall perspective in the tribune is that of a sociologist on statistics, but there is nothing to strongly object to.
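As a back-of-the-envelope check of the innumeracy example above, here is the actual arithmetic (the 2.4 and 6.5 Franc figures are as reported in the debate; the point is simply that neither sensible reading of the change gets anywhere near 700%):

```python
# Back-of-the-envelope check of the percentage claim reported above.
old, new = 2.4, 6.5  # values in Francs, as quoted in the debate

increase_pct = (new - old) / old * 100  # relative increase over the old value
ratio_pct = new / old * 100             # new value as a percentage of the old

print(f"actual increase: {increase_pct:.0f}%")  # ~171%, nowhere near 700%
print(f"ratio of values: {ratio_pct:.0f}%")     # ~271%, still nowhere near 700%
```

Even conflating the increase with the ratio, as innumerate commentary often does, only gets one to 271%.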

“These data analysts are often physicists or mathematicians, whose skills are not developed for the study of society at all.”

The second part of the paper is about the perceived shift from (official) statistics to another and much more dangerous type of data analysis. This is not a new view on the field, as shown by Weapons of Math Destruction. I tend to disagree with the perception that data handled by private companies for private purposes is inherently evil. The reluctance to trust conclusions drawn from such datasets also extends to publicly available datasets, and is not primarily linked to the lack of reproducibility of such analyses (which would be a perfectly rational argument!). Nor is it due to physicists or mathematicians, rather than quantitative sociologists, running those analyses! The roots of the mistrust are rather to be found in an anti-scientism that has been growing over the past decades, paradoxically within an equally growing technological society fuelled by scientific advances. Hence, calling for a governmental office of big data or some similar institution is very unlikely to solve the issue. I do not know what could, actually, but continuing to develop better statistical methodology cannot hurt!

can we trust computer simulations? [day #2]

Posted in Books, pictures, Statistics, Travel, University life on July 13, 2015 by xi'an

“Sometimes the models are better than the data.” G. Krinner

Second day at the conference on building trust in computer simulations. It started with a highly debated issue, climate change projections, since so many criticisms are addressed at climate models as being not only wrong but also unverifiable. And uncheckable. As explained by Gerhart Krinner, the IPCC has developed methodologies to compare models and evaluate predictions. However, from what I understood, this validation does not say anything about the future, which is the part of the predictions that matters. And that is attacked by critics and feeds climate-skeptics, because it is so easy to argue against the homogeneity of the climate evolution and for “what you’ve seen is not what you’ll get“! (Even though climate-skeptics are the least likely to use this time-heterogeneity argument, being convinced as they are of the lack of human impact on the climate.) The second talk, by Viktoria Radchuk, was about validation in ecology, defined here as a test of predictions against independent data (and designs). She mentioned Simon Wood’s synthetic likelihood as the Bayesian reference for conducting model choice (as a synthetic likelihood ratio). I had never thought of this use (found in Wood’s original paper) for synthetic likelihood, and I feel a bit queasy about using a synthetic likelihood ratio as a genuine likelihood ratio. Which led to a lively discussion at the end of her talk. The next talk was about validation in economics, by Matteo Richiardi, who discussed state-space models where the hidden state is observed through a summary statistic, a perfect playground for ABC! But Matteo opted instead for a non-parametric approach that seems to increase imprecision and that I have never seen used in state-space models. The last part of the talk was about non-ergodic models, for which checking validity becomes much more problematic, in my opinion, unless one manages multiple observations of the non-ergodic path.
Nicole Saam concluded this “Validation in…” morning with validation in sociology. She took a more pessimistic view of the possibility of finding a falsifying strategy, because of the vague nature of sociological models, for which data can never be fully informative. She illustrated the issue with an EU negotiation analysis, where most hypotheses could hardly be tested.
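For readers unfamiliar with Wood's synthetic likelihood mentioned above, here is a minimal sketch of the idea: simulate summary statistics from the model at a given parameter, fit a Gaussian to them, and evaluate the observed summary under that Gaussian; a ratio of two such quantities then stands in for a likelihood ratio. The model, summaries, and all settings below are illustrative toys, not taken from Wood's paper or Radchuk's talk.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

def summaries(x):
    # illustrative summary statistics: sample mean and log-variance
    return np.array([x.mean(), np.log(x.var())])

def log_synthetic_likelihood(theta, s_obs, simulate, n_sims=500):
    # simulate n_sims datasets at theta, fit a Gaussian to their
    # summaries, then evaluate the observed summary under that Gaussian
    sims = np.array([summaries(simulate(theta)) for _ in range(n_sims)])
    mu = sims.mean(axis=0)
    Sigma = np.cov(sims, rowvar=False)
    return multivariate_normal(mu, Sigma).logpdf(s_obs)

# toy "observed" data from a N(1, 2^2) model
x_obs = rng.normal(1.0, 2.0, size=200)
s_obs = summaries(x_obs)

simulate = lambda th: rng.normal(th[0], th[1], size=200)

# synthetic log-likelihood ratio between two candidate parameter values,
# used here as a stand-in for a genuine log-likelihood ratio
llr = (log_synthetic_likelihood((1.0, 2.0), s_obs, simulate)
       - log_synthetic_likelihood((0.0, 1.0), s_obs, simulate))
print(f"synthetic log-likelihood ratio: {llr:.1f}")  # positive: favours (1, 2)
```

The queasiness expressed above is precisely about treating this Gaussian surrogate ratio as if it were a genuine likelihood ratio for model choice.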

“Bayesians persist with poor examples of randomness.” L. Smith

“Bayesians can be extremely reasonable.” L. Smith

The afternoon session was dedicated to methodology, mostly statistics! Andrew Robinson started with a talk on (frequentist) model validation, contrasting splitters and lumpers, illustrated by a forest growth model. He went through traditional hypothesis tests like Neyman-Pearson’s that try to split between samples, and (bio)equivalence tests that take the difference as the null, using his equivalence R package. Then Leonard Smith took over [in a literal way!] from a sort-of-Bayesian perspective, in joint work with Jim Berger and Gary Rosner on pragmatic Bayes, which was mostly negative about Bayesian modelling. He introduced (to me) the compelling notion of structural model error as a representation of the inadequacy of the model, with illustrations from weather and climate models. His criticism of the Bayesian approach is that it cannot be holistic while pretending to be [my wording], and that it is inadequate to measure model inadequacy, to the point of making prior choice meaningless. Funny enough, he went back to the ball-dropping experiment David Higdon discussed at one JSM I attended a while ago, with the unexpected outcome that one ball did not make it to the bottom of the shaft. A more positive side was that posteriors are useful models but should not be interpreted from a probabilistic perspective. Move beyond probability was his final message. (For most of the talk, I misunderstood P(BS), the probability of a big surprise, for something else…) This was certainly the most provocative talk of the conference and the discussion could have gone on for the rest of the day! Somehow, Lenny was voluntarily provocative in piling the responsibility upon the Bayesian’s head for being overconfident and not accounting for the physicists’ limitations in modelling the phenomenon of interest. The next talk was by Edward Dougherty on methods used in biology. He separated within-model uncertainty from outside-model inadequacy. The within-model part is mostly easy to agree upon, even though difficulties in estimating parameters create uncertainty classes of models, especially in what is a small-data discipline. He analysed the impact of machine learning techniques like classification as being useless without prior knowledge, and argued in favour of the Bayesian minimum mean square error estimator, which can also lead to a classifier, and to experimental design. (Using MSE seems rather reductive when facing large-dimensional parameters.) The last talk of the day was by Nicolas Becu, a geographer, with a surprising approach to validation via stakeholders. A priori, not too enticing a name! The discussion was of a more philosophical nature, going back to (re)defining validation against reality and imperfect models, and including social aspects of validation, e.g., reality being socially constructed. This led to the stakeholders, because a model is then a shared representation. Nicolas illustrated the construction by simulation “games” of a collective model in a community of Thai farmers and in a group of water users.
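The Bayesian minimum mean square error estimator mentioned above is, classically, the posterior mean, which minimises the posterior expected squared error. A minimal sketch in a toy conjugate normal model (all settings below are illustrative, not from Dougherty's talk) compares its frequentist MSE, averaged over the prior, with that of the sample mean:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy conjugate setup: theta ~ N(0, tau^2), x_i | theta ~ N(theta, sigma^2)
tau, sigma, n = 1.0, 2.0, 10
n_reps = 20_000

theta = rng.normal(0.0, tau, size=n_reps)           # draws from the prior
x = rng.normal(theta[:, None], sigma, size=(n_reps, n))
xbar = x.mean(axis=1)

# posterior mean = Bayes/MMSE estimator: shrink the sample mean towards 0
shrink = (n / sigma**2) / (n / sigma**2 + 1 / tau**2)
post_mean = shrink * xbar

mse_bayes = np.mean((post_mean - theta) ** 2)
mse_mle = np.mean((xbar - theta) ** 2)
print(f"MSE of posterior mean: {mse_bayes:.3f}")  # near 1/(n/sigma^2 + 1/tau^2) = 0.286
print(f"MSE of sample mean:    {mse_mle:.3f}")    # near sigma^2/n = 0.4, i.e. larger
```

The parenthetical caveat above still applies: averaging a single scalar MSE gets less and less meaningful as the parameter dimension grows.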

In a rather unique fashion, we also had an evening discussion on points we shared and points we disagreed upon. After dinner (and wine), which did not help, I fear! Bill Oberkampf mentioned the use of manufactured solutions to check code, which seemed very much related to physics. But then we got mired in the necessity of dividing between verification and validation, which sounded far too engineering-like to me. Maybe because I do not usually integrate coding errors and algorithmic errors into my reasoning (verification)… although sharing code and making it available makes a big difference. Or maybe because considering that all models are wrong is not part of my methodology (validation). This part ended up in a fairly pessimistic conclusion on the lack of trust in most published articles, at least in the biological sciences.

latest interviews on the philosophy of religion(s)

Posted in Books, Kids on November 1, 2014 by xi'an

“But is the existence of God just a philosophical question, like, say, the definition of knowledge or the existence of Plato’s forms?” Gary Gutting, NYT

Although I stopped following The Stone‘s interviews of philosophers about their views on religion, six more took place and Gary Gutting has now closed the series he started a while ago with a self-interview. On this occasion, I went quickly through the last interviews, which had the same variability in depth and appeal as the earlier ones. A lot of them were somewhat misplaced in trying to understand or justify the reasons for believing in a god (a.k.a., God), which sounds more appropriate for a psychology or sociology perspective. I presume that what I was expecting from the series was more a “science vs. religion” debate, rather than entries into the metaphysics of various religions…