sleeping beauty
Through X validated, W. Huber made me aware of this probability paradox [or para-paradox] of which I had never heard before. One of many guises of this paradox goes as follows:
Shahrazad is put to sleep on Sunday night. Depending on the hidden toss of a fair coin, she is awakened either once (Heads) or twice (Tails). After each awakening, she goes back to sleep and forgets that awakening. When awakened, what should her probability of Heads be?
My first reaction was to argue that Shahrazad gains no information between the time she goes to sleep, when the coin is fair, and the time(s) she is awakened, apart from being awakened, since she does not know how many times she has been awakened, so the probability of Heads remains ½. However, thinking more about it on my bike ride to work, I came to see the problem as a decision-theoretic or betting problem, which makes ⅓ the optimal answer.
I then read [if not the huge literature] a rather extensive analysis of the paradox by Cisewski, Kadane, Schervish, Seidenfeld, and Stern (CKS³), which reaches roughly the same conclusion, namely that, when Monday is completely exchangeable with Tuesday, meaning that no event can give Shahrazad any indication of which day it is, the posterior probability of Heads does not change (Corollary 1), but that a fair betting strategy is p=1/3, with the somewhat confusing remark by CKS³ that this may differ from her credence. But then what is the point of the experiment? Or what is the meaning of credence? If Shahrazad is asked for an answer, there must be a utility or a penalty involved, otherwise she could just as well reply with a probability of p=-3.14 or p=10.56… This makes for another ill-defined aspect of the “paradox”.
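The betting argument can be made concrete with a toy penalty. As an illustrative sketch (in Python, assuming a quadratic/Brier penalty, which is one choice of loss and not necessarily the one CKS³ use): under Heads Shahrazad answers once, under Tails twice, so the expected per-experiment penalty for always reporting p is ½(1−p)² + ½·2p², which is minimised at p=1/3.

```python
def expected_brier(p):
    """Expected per-experiment quadratic penalty for reporting
    probability p of Heads at every awakening: one report under
    Heads (truth 1), two reports under Tails (truth 0)."""
    return 0.5 * (1 - p) ** 2 + 0.5 * 2 * p ** 2

# crude grid search over p in [0, 1]
best = min((i / 1000 for i in range(1001)), key=expected_brier)
print(best)  # minimiser sits near 1/3, not 1/2
```

Setting the derivative 3p − 1 to zero confirms the minimum at p = 1/3, whatever the grid resolution.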
Another remark about this ill-posed nature of the experiment is that, when imagining running an ABC experiment, I could only come up with one where the fair coin is tossed (Heads or Tails) and a day (Monday or Tuesday) is chosen at random. Then every proposal (Heads or Tails) is accepted as an awakening, hence the posterior on Heads is the uniform prior. The same would not occur if we considered the pair of awakenings under Tails as two occurrences of (p,E), but this does not sound (as) correct since Shahrazad only knows of one E: to paraphrase Jeffreys, this is an unobservable result that may not have occurred. (Or in other words, Bayesian learning is not possible on Groundhog Day!)
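That naive ABC scheme can be sketched in a few lines (Python, function name hypothetical): toss the coin, draw a day at random, accept every (coin, day) pair as an awakening, and the frequency of Heads among accepted draws stays at the prior ½.

```python
import random

def abc_naive(n=100_000, seed=42):
    # Simulate from the prior: a fair coin plus an exchangeable day,
    # accepting every (coin, day) pair as "an awakening" (no rejection).
    rng = random.Random(seed)
    accepted = []
    for _ in range(n):
        coin = rng.choice(["Heads", "Tails"])
        day = rng.choice(["Monday", "Tuesday"])  # carries no information
        accepted.append(coin)
    return accepted.count("Heads") / len(accepted)

print(abc_naive())  # close to the prior 1/2
```

Since nothing is ever rejected, the accepted sample is just the prior sample, which is the point of the remark above.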
December 27, 2016 at 3:56 am
Thanks, Radford, I did not know about this paper! First, I love the notion of [not] being “averse to being convinced”. I do agree that the betting argument is sound and should be Shahrazad’s answer if she is operating under this loss function. Actually, rerunning my ABC leads me to reconsider the conclusion above: if I simulate from the prior by tossing the coin, and produce observations in the guise of one (Heads) or two (Tails) awakenings, these observations all agree with Shahrazad’s observation, which does lead to a 1/3 resolution. Second, I associate the endless arguing about such paradoxes with their ambiguous wording, i.e., it is much more a problem of the imprecision of our vernacular than of probability.
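A sketch of that rerun (Python, function name hypothetical): each prior draw of the coin now produces one awakening under Heads and two under Tails, all of them indistinguishable from Shahrazad’s observation and hence all accepted, so the frequency of Heads among accepted awakenings approaches 1/3.

```python
import random

def abc_awakenings(n=100_000, seed=42):
    # Simulate from the prior by tossing the fair coin; Heads produces
    # one awakening, Tails two. Every awakening agrees with what
    # Shahrazad observes, so all of them are accepted.
    rng = random.Random(seed)
    awakenings = []
    for _ in range(n):
        coin = rng.choice(["Heads", "Tails"])
        k = 1 if coin == "Heads" else 2
        awakenings.extend([coin] * k)
    return awakenings.count("Heads") / len(awakenings)

print(abc_awakenings())  # close to 1/3
```

Counting awakenings rather than coin tosses is exactly what tips the answer from ½ to ⅓: Tails is represented twice in the accepted sample.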
December 24, 2016 at 6:25 pm
Why don’t you think that her information has changed by being woken up?
I really don’t see why her belief about the throw wouldn’t be that there is a 1/3 probability of Heads. Any slight modification of that scenario agrees with that. But it feels wrong to disagree with you…
December 26, 2016 at 12:30 pm
Feel free to disagree! The paradox is called a paradox because there is no consensus on the answer.
December 26, 2016 at 6:24 pm
There may be no “consensus”, but I think that’s the same as there being no consensus on the Monty Hall problem. I think there’s an unfortunate tendency for philosophical problems to never end, because one side just won’t admit that some problems really do have right answers, and wrong answers. They keep thinking that philosophical positions are always a matter of opinion and they can just keep ignoring the conclusive arguments that they’re wrong…
The right answer is 1/3, as I think I have conclusively shown in a section of my paper at http://www.cs.utoronto.ca/~radford/anth.abstract.html
Of course, I don’t claim to be the only one to have shown this, but the Sailor’s Child example I give is, I think, particularly convincing (if you’re not averse to being convinced).
January 17, 2017 at 6:41 pm
xi’an:
I believe Radford is completely correct here.
This is a fiction, which one can argue has a correct representation (unlike realities) and hence, by deduction, one and only one result. If the fiction is vague enough to admit k correct representations, then k results; if too vague for any representation, then no result.
The never-endingness, I believe, comes from folks trying to bring abduction or qualitative inference to bear on the fiction. Those are only appropriate for discerning an adequate representation; once one has that, it’s just deduction.
By the way, I believe I arrived at the same position, using, if you wish, ABC, where the outcome observed by Sleeping Beauty is compatible with any simulated outcome – no rejections – http://stats.stackexchange.com/revisions/53ededf8-7610-4a99-b110-def80a083737/view-source
Keith O’Rourke
p.s. Happy New Year