Archive for decision theory

Dutch book for sleeping beauty

Posted in Kids, Statistics, University life, Books on May 15, 2017 by xi'an

After my short foray into Dutch book arguments two weeks ago in Harvard, I spotted a recent arXival by Vincent Conitzer analysing the sleeping beauty paradox from a Dutch book perspective. (The paper “A Dutch book against sleeping beauties who are evidential decision theorists” actually appeared in Synthese two years ago, which makes me wonder why it comes out only now on arXiv. And yes, I am aware the above picture is about Banksy’s Cinderella and not sleeping beauty!)

“if Beauty is an evidential decision theorist, then in variants where she does not always have the same information available to her upon waking, she is vulnerable to Dutch books, regardless of whether she is a halfer or a thirder.”

As recalled in the introduction of the paper, there exist ways to construct Dutch book arguments against thirders and halfers alike. Conitzer constructs a variant that also distinguishes between a causal and an evidential decision theorist (sleeping beauty), the latter being susceptible to yet another Dutch book. Which is where I get lost, as I had no idea of a distinction between those two types of decision theory. Quickly checking on Wikipedia returned the notion that the latter decision theory maximises the expected utility conditional on the decision, but this does not clarify the issue much, in that it seems to imply the decision impacts the probability of the event… Hence leaving me unable to judge the relevance of the arguments therein (no surprise, since this is only based on a cursory read).
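
For readers as lost as I was, a minimal numerical sketch of the distinction may help; it uses the textbook Newcomb setting rather than anything from Conitzer’s paper, and all numbers below are invented for illustration: the evidential theorist conditions the state on the contemplated action, while the causal theorist keeps the state’s marginal distribution.

```python
# Toy illustration of evidential vs causal expected utility on the
# textbook Newcomb setting; the 0.9 predictor accuracy and the payoffs
# are made-up numbers, not taken from Conitzer's paper.

states = ["predicted one-box", "predicted two-box"]
actions = ["one-box", "two-box"]

# beliefs about the state *given* the action (90% accurate predictor)
p_state_given_action = {
    ("one-box", "predicted one-box"): 0.9,
    ("one-box", "predicted two-box"): 0.1,
    ("two-box", "predicted one-box"): 0.1,
    ("two-box", "predicted two-box"): 0.9,
}
# marginal beliefs about the state, unaffected by the action
p_state = {"predicted one-box": 0.5, "predicted two-box": 0.5}

utility = {
    ("one-box", "predicted one-box"): 1_000_000,
    ("one-box", "predicted two-box"): 0,
    ("two-box", "predicted one-box"): 1_001_000,
    ("two-box", "predicted two-box"): 1_000,
}

for a in actions:
    # evidential: the action is treated as evidence about the state
    eu_ev = sum(p_state_given_action[(a, s)] * utility[(a, s)] for s in states)
    # causal: the action cannot influence the already-fixed state
    eu_ca = sum(p_state[s] * utility[(a, s)] for s in states)
    print(f"{a}: evidential EU = {eu_ev:,.0f}, causal EU = {eu_ca:,.0f}")
# evidential EU recommends one-boxing, causal EU recommends two-boxing
```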

round-table on Bayes[ian[ism]]

Posted in Books, pictures, Statistics, University life on March 7, 2017 by xi'an

In a [sort of] coincidence, shortly after writing my review of Le bayésianisme aujourd’hui, I got invited by the book editor, Isabelle Drouet, to take part in a round-table on Bayesianism at La Sorbonne. Which constituted the first seminar in the monthly series of the séminaire “Probabilités, Décision, Incertitude”. An invitation that I accepted and honoured by joining this public debate (if not dispute) on all [or most] things Bayes. Along with Paul Egré (CNRS, Institut Jean Nicod) and Pascal Pernot (CNRS, Laboratoire de chimie physique). And without a neuroscientist, who could not or would not attend.

While nothing earthshaking came out of the seminar, and certainly not from me!, it was interesting to hear the perspectives of my philosophy+psychology and chemistry colleagues, the former explaining his path from classical to Bayesian testing—while mentioning trying to read the book Statistical Rethinking, reviewed here a few months ago—and the latter the difficulty of teaching both colleagues and students the need for an assessment of uncertainty in measurements. And alluding to GUM, developed by the Bureau International des Poids et Mesures, which I visited last year. I tried to present my relativity viewpoints on the [relative] nature of the prior, to avoid the usual morass of debates on the nature and subjectivity of the prior, tried to explain Bayesian posteriors via ABC, mentioned examples from The Theorem that Would not Die, as yet untranslated into French, and expressed reserves about the glorious future of Bayesian statistics as we know it. This seminar was fairly enjoyable, with none of the stress induced by the constraints of a radio show. Just too bad it did not attract a wider audience!

le bayésianisme aujourd’hui [book review]

Posted in Books, pictures, Statistics, University life on March 4, 2017 by xi'an

It is quite rare to see a book published in French about Bayesian statistics and even rarer to find one that connects philosophy of science, foundations of probability, statistics, and applications in neurosciences and artificial intelligence. Le bayésianisme aujourd’hui (Bayesianism today) was edited by Isabelle Drouet, a Reader in Philosophy at La Sorbonne. And includes a chapter of mine on the basics of Bayesian inference (à la Bayesian Choice), written in French like the rest of the book.

The title of the book is rather surprising (to me) as I had never heard the term Bayesianism mentioned before. As shown by this link, the term apparently exists. (Even though I dislike the sound of it!) The notion is one of a probabilistic structure of knowledge and learning, à la Poincaré. As described in the beginning of the book. But I fear the arguments minimising the subjectivity of the Bayesian approach should not be advanced, following my new stance on the relativity of probabilistic statements, if only because they are defensive and open the path all too easily to counterarguments. Similarly, the argument according to which the “Big Data” era makes the impact of the prior negligible and paradoxically justifies the use of Bayesian methods is limited to the case of little Big Data, i.e., when the observations are more or less iid with a limited number of parameters. Not when the number of parameters explodes. Another set of arguments that I find both more modern and compelling [for being modern is not necessarily a plus!] is the ease with which the Bayesian framework allows for integrative and cooperative learning. Along with its ultimate modularity, since each component of the learning mechanism can be extracted and replaced with an alternative.

sleeping beauty

Posted in Books, Kids, Statistics on December 24, 2016 by xi'an

Through X validated, W. Huber made me aware of this probability paradox [or para-paradox] of which I had never heard before. One of many guises of this paradox goes as follows:

Shahrazad is put to sleep on Sunday night. Depending on the hidden toss of a fair coin, she is awakened either once (Heads) or twice (Tails). After each awakening, she goes back to sleep and forgets that awakening. When awakened, what should her probability of Heads be?

My first reaction is to argue that Shahrazad does not gain information between the time she goes to sleep, when the coin is fair, and the time(s) she is awakened, apart from being awakened, since she does not know how many times she has been awakened; so the probability of Heads remains ½. However, when thinking more about it on my bike ride to work, I came to see the problem as a decision-theoretic or betting problem, which makes ⅓ the optimal answer, since two out of every three awakenings occur under Tails.
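
A quick Monte Carlo rendering of the two readings (my own toy code, not from any reference): per run of the experiment Heads has probability ½, yet only a third of the awakenings happen under Heads.

```python
import random

# Simulate the experiment: one awakening under Heads, two under Tails.
n_runs = 100_000
heads_runs = 0        # runs where the coin lands Heads
awakenings = 0        # total number of awakenings across all runs
heads_awakenings = 0  # awakenings that occur under Heads

for _ in range(n_runs):
    heads = random.random() < 0.5
    wakes = 1 if heads else 2
    awakenings += wakes
    if heads:
        heads_runs += 1
        heads_awakenings += 1

print(heads_runs / n_runs)            # ~ 0.5: probability of Heads per run
print(heads_awakenings / awakenings)  # ~ 1/3: fair betting odds per awakening
```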

I then read [if not the huge literature, at least] a rather extensive analysis of the paradox by Cisewski, Kadane, Schervish, Seidenfeld, and Stern (CKS³), which reaches roughly the same conclusion, namely that, when Monday is completely exchangeable with Tuesday, meaning that no event can give Shahrazad any indication of which day it is, the posterior probability of Heads does not change (Corollary 1), but that a fair betting strategy is p=1/3, with the somewhat confusing remark by CKS³ that this may differ from her credence. But then what is the point of the experiment? Or what is the meaning of credence? If Shahrazad is asked for an answer, there must be a utility or a penalty involved, otherwise she could just as well reply with a probability of p=-3.14 or p=10.56… This makes for another ill-defined aspect of the “paradox”.

Another remark about the ill-posed nature of the experiment is that, when imagining running an ABC experiment, I could only come up with one where the fair coin is thrown (Heads or Tails) and a day (Monday or Tuesday) is chosen at random. Then every proposal (Heads or Tails) is accepted as an awakening, hence the posterior on Heads is the uniform prior. The same would not occur if we considered the pair of awakenings under Tails as two occurrences of (p,E), but this does not sound (as) correct since Shahrazad only knows of one E: to paraphrase Jeffreys, this is an unobservable result that may not have occurred. (Or in other words, Bayesian learning is not possible on Groundhog Day!)
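
Here is a literal rendering of that ABC experiment (a sketch of my own reading above, not code from any reference): since the sole observation is “an awakening” and every simulated (coin, day) pair produces one, the acceptance step is vacuous and the accepted coins reproduce the prior.

```python
import random

# ABC for sleeping beauty: propose a coin and a day, accept when the
# proposal is compatible with the observation "an awakening".
accepted = []
for _ in range(100_000):
    coin = random.choice(["Heads", "Tails"])
    day = random.choice(["Monday", "Tuesday"])
    awakening = True  # every proposal yields an awakening
    if awakening:     # vacuous acceptance step
        accepted.append(coin)

print(accepted.count("Heads") / len(accepted))  # ~ 0.5: posterior = prior
```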

going to war [a riddle]

Posted in Books, Kids, Statistics on December 16, 2016 by xi'an

On the Riddler this week, a seemingly obvious riddle:

A game consists of Alice and Bob, each holding a $1 bill, receiving a U(0,1) strength each, unknown to the other, and deciding whether or not to bet on their strength being larger than the opponent’s. If neither player bets, both keep their $1 bill. Otherwise, the player with the larger strength leaves with both bills. Find the optimal strategy.

As often when “optimality” is mentioned, the riddle is unclear because, when looking at the problem from a decision-theoretic perspective, the loss function of each player is not defined in the question. But the St. Petersburg paradox shows that the type of loss clearly matters, and the utility of money is anything but linear for large values, as explained by Daniel Bernoulli in 1738 (and later analysed by Laplace in his Essai philosophique). Let us assume therefore that both players live in circumstances where losing or winning $1 makes little difference, hence where the utility is linear. A loss function attached to the experiment for Alice [and a corresponding utility function for Bob] could then be a function of (a,b), the result of both Uniform draws, and of the decisions δ¹ and δ² of both players:

L(a,b,\delta^1,\delta^2)=\begin{cases}0&\text{if }\delta^1=\delta^2=0\\\mathbb{I}(a<b)-\mathbb{I}(a>b)&\text{otherwise}\end{cases}

Considering this loss function, Alice aims at minimising her expected loss by her choice of δ¹, equal to zero or one, an expected loss that hence depends on the unknown and simultaneous decision of Bob. If for instance Alice assumes that Bob takes the decision to compete when observing an outcome b larger than a certain bound α, her decision is based on the comparison of (with B distributed as Uniform(0,1))

\mathbb{P}(a<B,B>\alpha)-\mathbb{P}(a>B,B>\alpha)=2(1-a\vee\alpha)-(1-\alpha)

(if δ¹=0) and of 1-2a (if δ¹=1). Comparing both expected losses leads to Alice competing (δ¹=1) when a>α/2.

However, there is no reason Alice should know the value of α when playing the (single) game, and so she may think that Bob follows the same reasoning, leading him to choose a new bound of α/4, and, by iterating the thought process, all the way down to α=0! So this modelling leads to always playing the game, with each player having a ½ probability of winning… Alternatively, Alice may set a prior on α, which leads to another bound on a for playing or not playing the game. Which in itself is not satisfactory either. (The published solution follows the above argument, minus the maths expressions.)
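
As a sanity check on the argument above, here is a small Monte Carlo sketch (my own code, not the Riddler’s): against a Bob who bets whenever b>α, Alice’s expected loss from betting crosses her expected loss from staying exactly at a=α/2, and iterating the best response halves the threshold each round.

```python
import random

def expected_losses(a, alpha, n=200_000):
    """Alice's Monte Carlo expected losses against a threshold-alpha Bob."""
    bet = stay = 0.0
    for _ in range(n):
        b = random.random()
        # if Alice bets, the game is always played
        bet += (a < b) - (a > b)
        # if Alice stays, the game is played only when Bob bets (b > alpha)
        if b > alpha:
            stay += (a < b) - (a > b)
    return bet / n, stay / n

alpha = 0.5
for a in (0.2, 0.25, 0.3):   # crossover should sit at alpha / 2 = 0.25
    print(a, expected_losses(a, alpha))

# iterating the best response drives the threshold to zero: always play
threshold = 1.0
for _ in range(10):
    threshold /= 2
print(threshold)  # ~ 0.001: effectively zero after a few rounds
```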

contemporary issues in hypothesis testing

Posted in Statistics on September 26, 2016 by xi'an

This week [at Warwick], among other things, I attended the CRiSM workshop on hypothesis testing, giving the same talk as at ISBA last June. There was a most interesting and unusual talk by Nick Chater (from Warwick) about the psychological aspects of hypothesis testing, namely the unnatural features of an hypothesis in everyday life, i.e., how far this formalism stands from human psychological functioning. Or from what we know about it. And then my Warwick colleague Tom Nichols explained how his recent work on permutation tests for fMRI, published in PNAS, which ran tests that should have returned null results on real data and instead found a high rate of false positives, got the medical imaging community all up in arms due to over-simplified reports in the media questioning the validity of 15 years of research on fMRI and the related 40,000 papers! For instance, some of the headlines questioned the entire research in the area. Or turned a software bug missing the boundary effects into a major flaw. (See this podcast on Not So Standard Deviations for a thoughtful discussion of the issue.) One conclusion of this story is to be wary of assertions when submitting a hot story to journals with a substantial non-scientific readership! The afternoon talks were equally exciting, with Andrew explaining to us live from New York why he hates hypothesis testing and prefers model building. With the birthday model as an example. And David Draper gave an encompassing talk about the distinctions between inference and decision, proposing a Jaynes information criterion and illustrating it on Mendel‘s historical [and massaged!] pea dataset. The next morning, Jim Berger gave an overview of the frequentist properties of the Bayes factor, with in particular a novel [to me] upper bound on the Bayes factor associated with a p-value (Sellke, Bayarri and Berger, 2001):

B¹⁰(p) ≤ 1/(-e p log p)

with the specificity that B¹⁰(p) is not testing the original hypothesis [problem] but a substitute where the null is the hypothesis that p is uniformly distributed, versus a non-parametric alternative that p is more concentrated near zero. This reminded me of our PNAS paper on the impact of summary statistics upon Bayes factors. And of some forgotten reference studying Bayesian inference based solely on the p-value… It is too bad I had to rush back to Paris, as this made me miss the last talks of this fantastic workshop centred on maybe the most important aspect of statistics!
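
To make the Sellke-Bayarri-Berger bound concrete, here is a quick evaluation (my own snippet) at standard significance levels; the bound is valid for p < 1/e:

```python
from math import e, log

def bf_bound(p):
    """Sellke-Bayarri-Berger upper bound on the Bayes factor B10(p), p < 1/e."""
    return 1.0 / (-e * p * log(p))

for p in (0.05, 0.01, 0.005, 0.001):
    print(f"p = {p}: B10 bound = {bf_bound(p):.2f}")
# p = 0.05 caps the Bayes factor against the uniform null at about 2.46
```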

comments on Watson and Holmes

Posted in Books, pictures, Statistics, Travel on April 1, 2016 by xi'an


“The world is full of obvious things which nobody by any chance ever observes.” The Hound of the Baskervilles

In connection with the forthcoming publication of James Watson’s and Chris Holmes’ Approximating models and robust decisions in Statistical Science, Judith Rousseau and I wrote a discussion of the paper, which was arXived yesterday.

“Overall, we consider that the calibration of the Kullback-Leibler divergence remains an open problem.” (p.18)

While the paper connects with earlier ones by Chris and coauthors, and possibly despite the overall critical tone of our comments!, I really appreciate the renewed interest in robustness advocated in this paper. I was going to write Bayesian robustness, but, to differ from the perspective adopted in the 1990s, when robustness was mostly about the prior, I would say this is rather a Bayesian approach to model robustness from a decisional perspective. With definitive innovations like considering the impact of posterior uncertainty over the decision space, uncertainty being defined e.g. in terms of Kullback-Leibler neighbourhoods. Or with a Dirichlet process distribution on the posterior. This may step out of the standard Bayesian approach but it remains of definite interest! (And note that this discussion of ours [reluctantly!] refrained from capitalising on the names of the authors to build easy puns linked with the most Bayesian of all detectives!)
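
To make the Kullback-Leibler neighbourhood device concrete, here is a sketch of how I understand the worst-case expected loss computation; it assumes (following my reading of Watson and Holmes, not their code) that the least favourable distribution within a KL ball exponentially tilts the posterior by the loss, and the gamma losses and KL budget below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.gamma(2.0, 1.0, size=10_000)  # loss evaluated at posterior draws

def tilted(lmbda, losses):
    """Exponentially tilt the empirical posterior by the loss."""
    w = np.exp(lmbda * (losses - losses.max()))      # stabilised tilt
    w /= w.sum()
    kl = float(np.sum(w * np.log(w * len(losses))))  # KL to the empirical measure
    return float(np.sum(w * losses)), kl

budget = 0.1        # radius of the KL neighbourhood
lo, hi = 0.0, 10.0  # bisection bracket for the tilt parameter
for _ in range(60):
    mid = (lo + hi) / 2
    if tilted(mid, losses)[1] < budget:
        lo = mid
    else:
        hi = mid

worst, kl = tilted(lo, losses)
print(losses.mean(), worst, kl)  # nominal vs worst-case expected loss
```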