Archive for confounders

21w5107 [½day 3]

Posted in pictures, Statistics, Travel, University life on December 2, 2021 by xi'an

Day [or half-day] three started without firecrackers and with David Rossell (formerly Warwick) presenting an empirical Bayes approach to generalised linear model choice under a high degree of confounding, using approximate Laplace approximations, with considerable improvements in the experimental RMSE. Making me feel sorry there was no apparent fully (and objective?) Bayesian alternative! (Two more papers on my reading list that I should have read way earlier!) Then Veronika Rockova discussed her work on approximate Metropolis-Hastings by classification. (With only a slight overlap with her One World ABC seminar.) Making me once more think of Geyer’s n°564 technical report, namely the estimation of a marginal likelihood by a logistic discrimination representation. Her ABC resolution replaces the tolerance step by an exponential of minus the estimated Kullback-Leibler divergence between the data density and the density associated with the current value of the parameter. (I wonder if there is a residual multiplicative constant there… Presumably not. Great idea!) The classification step needs to be run at every iteration, which could be sped up by subsampling.
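Rockova’s actual classifier is more sophisticated, but the logistic-discrimination idea behind the KL estimate, and the resulting acceptance probability, can be sketched in a few lines (toy Gaussian data, a hand-rolled logistic regression with quadratic features; all choices here are mine, not hers):

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_kl_by_classification(x_obs, x_sim, n_iter=500, lr=0.1):
    """Estimate KL(data || model) via logistic discrimination: with equal
    sample sizes, a classifier separating observed from simulated points
    recovers the log density ratio, whose average over the observed data
    is a Monte Carlo estimate of the Kullback-Leibler divergence."""
    X = np.concatenate([x_obs, x_sim])[:, None]
    y = np.concatenate([np.ones(len(x_obs)), np.zeros(len(x_sim))])
    F = np.hstack([np.ones_like(X), X, X**2])   # features: 1, x, x^2
    w = np.zeros(3)
    for _ in range(n_iter):                     # gradient ascent on log-likelihood
        p = 1.0 / (1.0 + np.exp(-F @ w))
        w += lr * F.T @ (y - p) / len(y)
    Fo = np.hstack([np.ones_like(x_obs[:, None]), x_obs[:, None], x_obs[:, None]**2])
    return max(float((Fo @ w).mean()), 0.0)     # clip noisy estimates at zero

# toy run: accept the current parameter value with probability exp(-KL_hat)
x_obs = rng.normal(0.0, 1.0, 200)
theta = 0.3                                     # current parameter value
x_sim = rng.normal(theta, 1.0, 200)             # data simulated under theta
kl_hat = estimate_kl_by_classification(x_obs, x_sim)
accept_prob = np.exp(-kl_hat)                   # replaces the ABC tolerance step
```

The closer the simulated density sits to the data density, the smaller the estimated divergence and the larger the acceptance probability, which is the tolerance-free mechanism described above.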

On the always fascinating theme of loss-based posteriors, à la Bissiri et al., Jack Jewson (formerly Warwick) exposed his work on generalised Bayes and improper models (from Birmingham!). Using data to decide between model and loss, which sounds highly unorthodox! A first difficulty is that losses are unscaled. Or even not integrable after an exponential transform. Hence the notion of improper models. As in the case of Tukey’s robust loss, which is bounded by an arbitrary κ. Immediately I wondered whether the fact that the pseudo-likelihood does not integrate matters beyond the (obvious) absence of a normalising constant. And the fact that this is not a generative model. The answer came a few slides later with the use of the Hyvärinen score, rather than the likelihood score, which can itself be turned into an H-posterior, very cool indeed! Although I wonder at the feasibility of finding an [objective] prior on κ.
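As a toy illustration of the improper-model idea (my own sketch, not Jewson’s implementation): a Gibbs posterior built from Tukey’s bounded loss, where even a gross outlier contributes at most κ²/6 to the criterion, so location inference stays robust even though the pseudo-likelihood does not integrate in the data:

```python
import numpy as np

def tukey_loss(u, kappa=3.0):
    """Tukey's biweight loss: quadratic near zero, flat (bounded by
    kappa^2/6) once |u| exceeds kappa."""
    r = np.clip(np.abs(u) / kappa, 0.0, 1.0)
    return (kappa**2 / 6.0) * (1.0 - (1.0 - r**2) ** 3)

def gibbs_posterior(theta_grid, data, kappa=3.0, eta=1.0):
    """Normalised loss-based (Gibbs) posterior on a grid, flat prior:
    weights proportional to exp(-eta * sum_i loss(x_i - theta)).
    Because the loss is bounded, exp(-loss) need not integrate in x,
    hence the 'improper model' terminology."""
    logw = np.array([-eta * tukey_loss(data - t, kappa).sum() for t in theta_grid])
    w = np.exp(logw - logw.max())               # stabilise before normalising
    return w / w.sum()

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 100), [50.0]])  # one gross outlier
grid = np.linspace(-2.0, 2.0, 401)
post = gibbs_posterior(grid, data)
theta_hat = grid[post.argmax()]   # robust MAP: the outlier's loss is capped
```

The pseudo-MAP stays near zero despite the outlier at 50, which a Gaussian likelihood would drag badly; the unresolved question in the talk is precisely how to calibrate η and κ from the data.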

Rajesh Ranganath completed the morning session with a talk on [the difficulty of] connecting Bayesian models and complex prediction models. Using instead a game theoretic approach with Brier scores under censoring. While there was a connection with Veronika’s use of a discriminator as a likelihood approximation, I had trouble catching the overall message…
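For reference, the Brier score underlying that game-theoretic construction (ignoring the censoring adjustment of the talk) is simply mean squared error on probabilistic forecasts:

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared difference between forecast probabilities and 0/1
    outcomes; lower is better, and it rewards both calibration and
    sharpness."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return float(np.mean((probs - outcomes) ** 2))

# a sharp, well-calibrated forecast beats a constant 50-50 forecast
print(brier_score([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # ≈ 0.025
print(brier_score([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0]))  # 0.25
```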

Nature snippets

Posted in Statistics on October 1, 2019 by xi'an

In the August 1 issue of Nature that I took with me to Japan, there were many entries of interest. The first pages included a tribune (“personal take on events”) by a professor of oceanography calling for a stop to the construction of the TMT telescope on the Mauna Kea mountain. While I am totally ignorant of the conditions of this construction and in particular of the possible ecological effects on a fragile high-altitude environment, the tribune is fairly confusing, invoking mostly communitarian and religious arguments, rather than scientific ones. And referring to Western science and Protestant missionaries as misrepresenting a principle of caution. While not seeing the contradiction in suggesting the move of the observatory to the Canary Islands, which were (also) invaded by Spanish settlers, in the 15th century.

Among other news, Indonesia is following regional tendencies to nationalise research, by forcing foreign researchers to have their data vetted by the national research agency and to include Indonesian nationals in their projects. And, although this now sounds like stale news, the worry about the buffoonesque Prime Minister of the UK. And about the eugenic tendencies of his cunning advisor… A longer article by Patrick Riley from Google covers three problems with machine learning, from splitting the data inappropriately (biases in the data collection) to hidden variables (unsuspected confounders) to mistaking the objective (impact of the loss function used to learn the predictive function). (Were these warnings heeded in the following paper claiming that deep learning was better at predicting kidney failures?) Another paper of personal interest reported a successful experiment in Guangzhou, China, infecting tiger mosquitoes with a bacterium to make the wild population sterile. While tiger mosquitoes have reached the Greater Paris area, and are thus becoming a nuisance, releasing 5 million more mosquitoes per week in the wild may not sound like the desired solution, but since the additional mosquitoes are overwhelmingly male, we would not feel the sting of this measure! The issue also contained a review paper on memory editing for clinical treatment of psychopathology, part of the 150 years of Nature anniversary collection, but that one I did not read (or else I forgot!).

from statistical evidence to evidence of causality

Posted in Books, Statistics on December 24, 2013 by xi'an

I took the opportunity of a long wait at a local administration today (!) to read an arXived paper by Dawid, Musio and Fienberg on the (both philosophical and practical) difficulty of establishing the probabilities of the causes of effects. The first interesting thing about the paper is that it relates to the Médiator drug scandal that took place in France over the past years and is still under trial: thanks to the investigations of a local doctor, Irène Frachon, the drug was exposed as an aggravating factor for heart disease. Or maybe the cause. Frachon’s case-control study summarises into a 2×2 table with a corrected odds ratio of 17.1. From there, the authors expose the difficulties of drawing inference about causes of effects, i.e. causality, an aspect of inference that has always puzzled me. (And the paper led me to search for the distinction between odds ratio and risk ratio.)
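The distinction is easy to see on a 2×2 table; the counts below are hypothetical, not Frachon’s data, merely chosen so that the odds ratio comes out near the reported 17.1:

```python
# 2x2 table layout: a = exposed with outcome, b = exposed without,
#                   c = unexposed with outcome, d = unexposed without.

def odds_ratio(a, b, c, d):
    """Cross-product ratio (a/b)/(c/d) = ad/bc; estimable from a
    case-control design, where outcome status drives the sampling."""
    return (a * d) / (b * c)

def risk_ratio(a, b, c, d):
    """Risk among exposed over risk among unexposed; needs cohort-style
    sampling, so it is not directly estimable from a case-control study,
    unlike the odds ratio."""
    return (a / (a + b)) / (c / (c + d))

a, b, c, d = 17, 100, 10, 1000          # hypothetical counts, rare outcome
print(odds_ratio(a, b, c, d))           # 17.0
print(risk_ratio(a, b, c, d))           # ~14.7: close to the OR when rare
```

The two ratios agree approximately only when the outcome is rare, which is the usual justification for quoting an odds ratio as if it were a relative risk.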

“And the conceptual and implementational difficulties that we discuss below, that beset even the simplest case of inference about causes of effects, will be hugely magnified when we wish to take additional account of such policy considerations.”

A third interesting notion in the paper is the inclusion of counterfactuals. My introduction to counterfactuals dates back to a run on the back-country roads around Ithaca, New York, when George told me about a discussion paper from Phil he was editing for JASA on that notion, with his philosopher neighbour Steven Schwartz as a discussant. (It was a great run, presumably in the late spring. And the best introduction I could dream of!) Now, the paper starts from the counterfactual perspective to conclude that inference is close to impossible in this setting. Within my limited understanding, I would see that as a drawback of using counterfactuals, rather than of drawing inference about causes. If the corresponding statistical model is nonidentifiable, because one of the two responses is always missing, the model seems inappropriate. I am also surprised at the notion of “sufficiency” used in the paper, since it sounds like the background information cancels the need to account for the treatment (e.g., aspirin) decision. The fourth point is the derivation of bounds on the probabilities of causation, despite everything! Quite an interesting read, thus!
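One such bound, predating this paper, is the classical lower bound PC ≥ 1 - 1/RR on the probability of causation; a sketch (treating the reported odds ratio as a stand-in for the risk ratio, which is only justified for a rare outcome, and leaving aside the refinements Dawid et al. derive):

```python
def prob_causation_lower_bound(rr):
    """Classical lower bound on the probability of causation for an
    exposed individual who developed the outcome: PC >= 1 - 1/RR,
    valid under assumptions such as exogeneity of the exposure."""
    return max(0.0, 1.0 - 1.0 / rr)

# with a risk ratio of 17.1 (using the corrected odds ratio reported in
# the Mediator case as a stand-in for the RR):
print(prob_causation_lower_bound(17.1))  # ≈ 0.94
```

With a ratio that large, the bound already attributes the outcome to the exposure with high probability, which is why such 2×2 summaries carry so much weight in court despite the identifiability issues discussed above.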
