Archive for decision theory

better together?

Posted in Mountains, Statistics, University life, Books, pictures on August 31, 2017 by xi'an

Yesterday came out on arXiv a joint paper by Pierre Jacob, Lawrence Murray, Chris Holmes and myself, Better together? Statistical learning in models made of modules, a paper that was conceived during the MCMski meeting in Chamonix in 2014! Indeed, it is mostly due to Martyn Plummer‘s talk at this meeting about the cut issue that we started to work on this topic at the fringes of the [standard] Bayesian world. Fringes because a standard Bayesian approach to the problem would always lead to using the entire dataset and the entire model to infer about a parameter of interest. [Disclaimer: the use of the very slogan of the anti-secessionists during the Scottish Independence Referendum of 2014 in our title is by no means a measure of support of their position!] Comments and suggested applications most welcome!

The setting of the paper is inspired by realistic situations where a model is made of several modules, connected within a graphical model that represents the statistical dependencies, each module relating to a specific data modality. In a standard Bayesian analysis, given data, a conventional statistical update then allows for coherent uncertainty quantification and information propagation through and across the modules. However, misspecification of, or even massive uncertainty about, any module in the graph can contaminate the estimates and updates of parameters of other modules, often in unpredictable ways, particularly so when certain modules are trusted more than others. Hence the appearance of cut models, where practitioners prefer to skip the full model and to limit the information propagation between these modules, for example by restricting propagation to only one direction along the edges of the graph. (Which is sometimes represented as a diode on the edge.) The paper investigates in which situations and under which formalism such modular approaches can outperform the full-model approach in misspecified settings, by developing the appropriate decision-theoretic framework, meaning we can choose between [several] modular and full-model approaches.
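To make the cut idea concrete, here is a minimal simulation sketch, entirely my own and not an example from the paper: a toy two-module Gaussian model where module 1 informs a parameter φ through data y₁, and module 2 links (φ,θ) to data y₂ but is deliberately biased. The modules, the bias, and the prior scale on θ are all assumed for illustration; the point is only that feedback from the misspecified module contaminates φ in the full-model analysis, while the cut version propagates information on φ in one direction only.

```python
# Toy two-module sketch (mine, not from the paper):
#   module 1:  y1_i ~ N(phi, 1),          flat prior on phi
#   module 2:  y2_i ~ N(phi + theta, 1),  prior theta ~ N(0, 0.1^2)
# The y2 data carry a bias the analyst's model cannot absorb, so feedback
# from module 2 contaminates phi under the full model, but not under the cut.
import numpy as np

rng = np.random.default_rng(1)
n1, n2, phi_true = 20, 20, 0.0
y1 = rng.normal(phi_true, 1.0, n1)
y2 = rng.normal(phi_true + 2.0, 1.0, n2)   # module 2 misspecified (bias +2)
w = 1 / 0.1**2                             # prior precision of theta

def sample(n_iter=20_000, cut=False):
    """Gibbs-type sampler for (phi, theta); with cut=True, phi ignores y2."""
    phi, theta = 0.0, 0.0
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        if cut:
            phi = rng.normal(y1.mean(), 1 / np.sqrt(n1))            # phi | y1 only
        else:
            prec = n1 + n2
            phi = rng.normal((y1.sum() + (y2 - theta).sum()) / prec,
                             1 / np.sqrt(prec))                     # phi | y1, y2, theta
        theta = rng.normal((y2 - phi).sum() / (n2 + w),
                           1 / np.sqrt(n2 + w))                     # theta | y2, phi
        draws[t] = phi, theta
    return draws

full, cut = sample(cut=False), sample(cut=True)
print("full-model E[phi|data] ~", round(full[:, 0].mean(), 2))  # pulled away from 0
print("cut-model  E[phi|data] ~", round(cut[:, 0].mean(), 2))   # close to phi_true = 0
```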

p-values and decision-making [reposted]

Posted in Statistics, University life, Books on August 30, 2017 by xi'an

In a letter to Significance about a review of Robert Matthews’s book, Chancing It, Nicholas Longford recalls a few basic points about p-values and decision-making made earlier by Dennis Lindley in Making Decisions. Here are some excerpts, worth repeating in the light of the 0.005 proposal:

“A statement of significance based on a p-value is a verdict that is oblivious to consequences. In my view, this disqualifies hypothesis testing, and p-values with it, from making rational decisions. Of course, the p-value could be supplemented by considerations of these consequences, although this is rarely done in a transparent manner. However, the two-step procedure of calculating the p-value and then incorporating the consequences is unlikely to match in its integrity the single-stage procedure in which we compare the expected losses associated with the two contemplated options.”

“At present, [Lindley’s] decision-theoretical approach is difficult to implement in practice. This is not because of any computational complexity or some problematic assumptions, but because of our collective reluctance to inquire about the consequences – about our clients’ priorities, remits and value judgements. Instead, we promote a culture of “objective” analysis, epitomised by the 5% threshold in significance testing. It corresponds to a particular balance of consequences, which may or may not mirror our clients’ perspective.”

“The p-value and statistical significance are at best half-baked products in the process of making decisions, and a distraction at worst, because the ultimate conclusion of a statistical analysis should be a proposal for what to do next in our clients’ or our own research, business, production or some other agenda. Let’s reflect and admit how frequently we abuse hypothesis testing by adopting (sometimes by stealth) the null hypothesis when we fail to reject it, and therefore do so without any evidence to support it. How frequently we report, or are party to reporting, the results of hypothesis tests selectively. The problem is not with our failing to adhere to the convoluted strictures of a popular method, but with the method itself. In the 1950s, it was a great statistical invention, and its popularisation later on a great scientific success. Alas, decades later, it is rather out of date, like the steam engine. It is poorly suited to the demands of modern science, business, and society in general, in which the budget and pocketbook are important factors.”
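To make Lindley’s single-stage comparison concrete, here is a minimal toy sketch of my own (not from the letter), with made-up losses: deciding whether to act on a treatment effect by thresholding a p-value at 5%, versus comparing the posterior expected losses of the two options when the two error types carry very different costs. The sample size, effect, and loss values are all assumptions for illustration.

```python
# Toy sketch (mine): decide whether theta > 0 from x_i ~ N(theta, 1), i = 1..n.
# Assumed losses: acting when theta <= 0 costs 10, not acting when theta > 0 costs 1.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, theta_true = 25, 0.3
x = rng.normal(theta_true, 1.0, n)

# two-sided p-value for H0: theta = 0
z = x.mean() * np.sqrt(n)
p_value = 2 * (1 - stats.norm.cdf(abs(z)))

# posterior under a flat prior: theta | x ~ N(xbar, 1/n)
post = stats.norm(x.mean(), 1 / np.sqrt(n))
p_neg = post.cdf(0.0)          # P(theta <= 0 | x)
loss_act = 10 * p_neg          # expected loss of acting
loss_not = 1 * (1 - p_neg)     # expected loss of not acting

print(f"p-value = {p_value:.3f}  ->  'significant' at 5%: {p_value < 0.05}")
print(f"expected loss: act = {loss_act:.2f}, do not act = {loss_not:.2f}")
print("decision by expected loss:", "act" if loss_act < loss_not else "do not act")
```

Depending on the assumed losses, the single-stage decision can obviously differ from the 5% verdict, which is Lindley’s (and Longford’s) point.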

Dutch book for sleeping beauty

Posted in Books, Kids, Statistics, University life on May 15, 2017 by xi'an

After my short foray into Dutch book arguments two weeks ago in Harvard, I spotted a recent arXival by Vincent Conitzer analysing the sleeping beauty paradox from a Dutch book perspective. (The paper “A Dutch book against sleeping beauties who are evidential decision theorists” actually appeared in Synthese two years ago, which makes me wonder why it comes out only now on arXiv. And yes, I am aware the above picture is about Banksy’s Cinderella and not sleeping beauty!)

“if Beauty is an evidential decision theorist, then in variants where she does not always have the same information available to her upon waking, she is vulnerable to Dutch books, regardless of whether she is a halfer or a thirder.”

As recalled in the introduction of the paper, there exist ways to construct Dutch book arguments against thirders and halfers alike. Conitzer constructs a variant that also distinguishes between a causal and an evidential decision theorist (sleeping beauty), the latter being susceptible to another Dutch book. Which is where I get lost, as I have no idea of the distinction between those two types of decision theory. Quickly checking on Wikipedia returned the notion that the latter decision theory maximises the expected utility conditional on the decision, but this does not clarify the issue in that it seems to imply the decision impacts the probability of the event… Hence I remain unable to judge the relevance of the arguments therein (which is no surprise since this is only based on a cursory read).

round-table on Bayes[ian[ism]]

Posted in Books, pictures, Statistics, University life on March 7, 2017 by xi'an

In a [sort of] coincidence, shortly after writing my review of Le bayésianisme aujourd’hui, I got invited by the book editor, Isabelle Drouet, to take part in a round-table on Bayesianism at La Sorbonne, which constituted the first seminar in the monthly series of the séminaire “Probabilités, Décision, Incertitude”. An invitation that I accepted and honoured by taking part in this public debate (if not dispute) on all [or most] things Bayes. Along with Paul Egré (CNRS, Institut Jean Nicod) and Pascal Pernot (CNRS, Laboratoire de chimie physique). And without a neuroscientist, who could not or would not attend.

While nothing earthshaking came out of the seminar, and certainly not from me!, it was interesting to hear the perspectives of my philosophy+psychology and chemistry colleagues, the former explaining his path from classical to Bayesian testing (while mentioning trying to read the book Statistical rethinking reviewed a few months ago), and the latter the difficulty of teaching both colleagues and students the need for an assessment of uncertainty in measurements, alluding to GUM, developed by the Bureau International des Poids et Mesures, which I visited last year. I tried to present my relativity viewpoints on the [relative] nature of the prior, to avoid the usual morass of debates on the nature and subjectivity of the prior, tried to explain Bayesian posteriors via ABC, mentioned examples from The Theorem that Would not Die, as yet untranslated into French, and expressed reserves about the glorious future of Bayesian statistics as we know it. The seminar was fairly enjoyable, with none of the stress induced by the constraints of a radio show. Just too bad it did not attract a wider audience!

le bayésianisme aujourd’hui [book review]

Posted in Books, pictures, Statistics, University life on March 4, 2017 by xi'an

It is quite rare to see a book published in French about Bayesian statistics and even rarer to find one that connects philosophy of science, foundations of probability, statistics, and applications in neurosciences and artificial intelligence. Le bayésianisme aujourd’hui (Bayesianism today) was edited by Isabelle Drouet, a Reader in Philosophy at La Sorbonne. And includes a chapter of mine on the basics of Bayesian inference (à la Bayesian Choice), written in French like the rest of the book.

The title of the book is rather surprising (to me) as I had never heard the term Bayesianism mentioned before. As shown by this link, the term apparently exists. (Even though I dislike the sound of it!) The notion is one of a probabilistic structure of knowledge and learning, à la Poincaré, as described in the beginning of the book. But I fear the arguments minimising the subjectivity of the Bayesian approach should not be advanced, following my new stance on the relativity of probabilistic statements, if only because they are defensive and open the path all too easily to counterarguments. Similarly, the argument according to which the “Big Data” era makes the impact of the prior negligible and paradoxically justifies the use of Bayesian methods is limited to the case of little Big Data, i.e., when the observations are more or less iid with a limited number of parameters, not when the number of parameters explodes. Another set of arguments that I find both more modern and compelling [for being modern is not necessarily a plus!] is the ease with which the Bayesian framework allows for integrative and cooperative learning, along with its ultimate modularity, since each component of the learning mechanism can be extracted and replaced with an alternative.

sleeping beauty

Posted in Books, Kids, Statistics on December 24, 2016 by xi'an

Through X validated, W. Huber made me aware of this probability paradox [or para-paradox] of which I had never heard before. One of many guises of this paradox goes as follows:

Shahrazad is put to sleep on Sunday night. Depending on the hidden toss of a fair coin, she is awakened either once (Heads) or twice (Tails). After each awakening, she goes back to sleep and forgets that awakening. When awakened, what should her probability of Heads be?

My first reaction is to argue that Shahrazad does not gain information between the time she goes to sleep, when the coin is fair, and the time(s) she is awakened, apart from being awakened, since she does not know how many times she has been awakened, so the probability of Heads remains ½. However, when thinking more about it on my bike ride to work, I thought of the problem as a decision-theory or betting problem, which makes ⅓ the optimal answer.
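For what it is worth, here is a minimal simulation of that betting reading (my own sketch, with an assumed payoff of 1 per correct Heads ticket): the per-experiment frequency of Heads stays at ½, but since Tails generates two awakenings, the break-even stake per awakening is about ⅓.

```python
# Sleeping-beauty betting sketch (mine): at each awakening Shahrazad buys, for a
# stake s, a ticket paying 1 if the coin landed Heads.  Under Tails she is
# awakened twice, hence buys two losing tickets.
import numpy as np

rng = np.random.default_rng(42)
n_exp = 100_000
heads = rng.random(n_exp) < 0.5           # fair coin, one toss per experiment
n_awakenings = np.where(heads, 1, 2)      # once if Heads, twice if Tails

total_awakenings = n_awakenings.sum()
winning_tickets = heads.sum()             # one winning ticket per Heads experiment

print("per-experiment frequency of Heads:", heads.mean())                        # ~ 1/2
print("per-awakening frequency of Heads :", winning_tickets / total_awakenings)  # ~ 1/3
# the break-even stake solves s * total_awakenings = winning_tickets, i.e. s ~ 1/3
```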

I then read [if not the huge literature] a rather extensive analysis of the paradox by Cisewski, Kadane, Schervish, Seidenfeld, and Stern (CKS³), which reaches roughly the same conclusion, namely that, when Monday is completely exchangeable with Tuesday, meaning that no event can bring any indication to Shahrazad of which day it is, the posterior probability of Heads does not change (Corollary 1), but that a fair betting strategy is p=⅓, with the somewhat confusing remark by CKS³ that this may differ from her credence. But then what is the point of the experiment? Or what is the meaning of credence? If Shahrazad is asked for an answer, there must be a utility or a penalty involved, otherwise she could just as well reply with a probability of p=-3.14 or p=10.56… This makes for another ill-defined aspect of the “paradox”.

Another remark about this ill-posed nature of the experiment is that, when imagining running an ABC experiment, I could only come up with one where the fair coin is thrown (Heads or Tails) and a day (Monday or Tuesday) is chosen at random. Then every proposal (Heads or Tails) is accepted as an awakening, hence the posterior on Heads is the uniform prior. The same would not occur if we considered the pair of awakenings under Tails as two occurrences of (p,E), but this does not sound (as) correct since Shahrazad only knows of one E: to paraphrase Jeffreys, this is an unobservable result that may not have occurred. (Or in other words, Bayesian learning is not possible on Groundhog Day!)
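Here is a quick sketch of that ABC experiment, my own rendering of the above description, with a hypothetical alternative acceptance rule added for contrast (assuming the standard telling where the single Heads awakening happens on Monday): accepting every (coin, day) proposal returns the uniform prior on Heads, while discarding the non-existent (Heads, Tuesday) awakenings returns the thirder value.

```python
# ABC-style sketch (mine) of the experiment described above: draw a coin outcome
# and a day uniformly; in the post's version every draw counts as an awakening.
import numpy as np

rng = np.random.default_rng(7)
n_sim = 100_000
coin = rng.choice(["H", "T"], size=n_sim)        # fair coin
day = rng.choice(["Mon", "Tue"], size=n_sim)     # day drawn at random

accept_all = np.ones(n_sim, dtype=bool)          # as in the post: accept everything
accept_real = ~((coin == "H") & (day == "Tue"))  # alternative: no Tuesday awakening under Heads

print("accept everything        :", (coin[accept_all] == "H").mean())   # ~ 1/2, the prior
print("discard (Heads, Tuesday) :", (coin[accept_real] == "H").mean())  # ~ 1/3, the thirder value
```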

going to war [a riddle]

Posted in Books, Kids, Statistics on December 16, 2016 by xi'an

On the Riddler this week, a seemingly obvious riddle:

A game consists of Alice and Bob, each with a $1 bill, receiving a U(0,1) strength each, unknown to the other, and deciding whether or not to bet on this strength being larger than the opponent’s. If no player bets, they both keep their $1 bill. Else, the winner leaves with both bills. Find the optimal strategy.

As often when “optimality” is mentioned, the riddle is unclear because, when looking at the problem from a decision-theoretic perspective, the loss function of each player is not defined in the question. But the St. Petersburg paradox shows that the type of loss clearly matters and that the utility of money is anything but linear for large values, as explained by Daniel Bernoulli in 1738 (and later analysed by Laplace in his Essai Philosophique). Let us assume therefore that both players live in circumstances where losing or winning $1 makes little difference, hence where the utility is linear. A loss function attached to the experiment for Alice [and a corresponding utility function for Bob] could then be a function of (a,b), the result of both Uniform draws, and of the decisions δ¹ and δ² of both players, namely

L(a,b,\delta^1,\delta^2)=\begin{cases}0&\text{if }\delta^1=\delta^2=0\\\mathbb{I}(a<b)-\mathbb{I}(a>b)&\text{else}\\\end{cases}

Considering this loss function, Alice aims at minimising the expected loss by her choice of δ¹, equal to zero or one, an expected loss that hence depends on the unknown and simultaneous decision of Bob. If, for instance, Alice assumes Bob decides to compete when observing an outcome b larger than a certain bound α, her decision is based on the comparison of (when B is Uniform (0,1))

\mathbb{P}(a<B,B>\alpha)-\mathbb{P}(a>B,B>\alpha)=2(1-a\vee\alpha)-(1-\alpha)

(if δ¹=0) and of 1-2a (if δ¹=1). Comparing both expected losses leads to Alice competing (δ¹=1) when a>α/2.

However, there is no reason Alice should know the value of α when playing the (single) game, and so she may think that Bob will follow the same reasoning, leading him to choose a new bound of α/4, and, by iterating the thought process, all the way down to α=0! So this modelling leads to always playing the game, with each player having a ½ probability of winning… Alternatively, Alice may set a prior on α, which leads to another bound on a for playing or not playing the game, which in itself is not satisfactory either. (The published solution follows the above argument, except for posting the maths expressions.)
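As a numerical check of the above argument, here is a Monte Carlo sketch of my own (the grid, sample size, and starting α=0.8 are arbitrary choices): for a given Bob threshold α, Alice’s estimated best response is a threshold close to α/2, and iterating the best response drives the thresholds towards zero.

```python
# Sketch checking the best-response argument above: if Bob bets when b > alpha,
# Alice's expected loss is minimised by betting when a > alpha / 2; iterating
# the (symmetric) best response sends the threshold to zero.
import numpy as np

rng = np.random.default_rng(3)
B = rng.random(1_000_000)                        # Monte Carlo sample of Bob's strengths

def alice_expected_loss(a, alice_bets, alpha):
    """Expected loss for Alice with strength a, given Bob bets when B > alpha."""
    game_on = alice_bets | (B > alpha)           # loss is 0 only if nobody bets
    return np.mean(game_on * np.sign(B - a))     # I(a<B) - I(a>B) when the game is on

def best_response(alpha, grid=np.linspace(0, 1, 201)):
    """Smallest strength above which betting beats not betting, for Bob's alpha."""
    bets = np.array([alice_expected_loss(a, True, alpha) <
                     alice_expected_loss(a, False, alpha) for a in grid])
    return grid[bets.argmax()]                   # first grid point where betting wins

alpha = 0.8
for _ in range(6):
    new_alpha = best_response(alpha)
    print(f"Bob threshold {alpha:.3f} -> Alice best response ~ {new_alpha:.3f}")
    alpha = new_alpha                            # roles are symmetric, so iterate
```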