Archive for game theory

Le Monde puzzle [#1085]

Posted in Books, Kids, R on February 18, 2019 by xi'an

A new Le Monde mathematical puzzle in the digit category:

Given 13 arbitrary signed integers chosen by Bo, Abigail can repeatedly select any subset of them to be shifted by plus or minus one by Bo, until Abigail reaches the largest possible number N of multiples of 5. What is the minimal possible value of N, assuming Bo tries to minimise it?

I got stuck on that one, as building a recursive function led me nowhere: the potential for an infinite loop (add one, subtract one, add one, …), rather than memory issues, forced me into a finite horizon for the R function, which then did not return anything substantial in a manageable time. Over the week and the swimming sessions, I thought of simplifying the steps, like (a) working modulo 5, and (b) biasing moves towards 1 or 4, away from 2 and 3, by keeping only one entry at 2 and 3, and all but one at 1 and 4, but I could only produce five 0’s over a sequence of attempts… With the intuition that only 3 entries should remain in the end, which was confirmed by Le Monde’s solution the week after.
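Just to fix ideas on step (a), here is a minimal R sketch of the modulo-5 reduction, assuming Bo’s 13 integers sit in a made-up vector bo, since only the residues matter when counting multiples of 5:

bo <- sample(-50:50, 13)   # a stand-in for Bo's 13 signed integers
res <- bo %% 5             # only the residues modulo 5 matter
# residues 1 and 4 are one step away from 0 modulo 5,
# while residues 2 and 3 are two steps away
table(res)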

the incomprehensible challenge of poker

Posted in Statistics on April 6, 2017 by xi'an

When reading in Nature about two deep learning algorithms winning at a version of poker within a few weeks of each other, I came back to my “usual” wonder about poker, as I cannot understand it as a game. (Although I can see the point, albeit dubious, in playing to win money.) And [definitely] correlatively do not understand the difficulty in building an AI that plays the game. [I know, I know nothing!]

round-table on Bayes[ian[ism]]

Posted in Books, pictures, Statistics, University life on March 7, 2017 by xi'an

In a [sort of] coincidence, shortly after writing my review of Le bayésianisme aujourd’hui, I got invited by the book editor, Isabelle Drouet, to take part in a round-table on Bayesianism at La Sorbonne. Which constituted the first seminar in the monthly series of the séminaire “Probabilités, Décision, Incertitude”. An invitation that I accepted and honoured by taking part in this public debate (if not dispute) on all [or most] things Bayes. Along with Paul Egré (CNRS, Institut Jean Nicod) and Pascal Pernot (CNRS, Laboratoire de chimie physique). And without a neuroscientist, who could not or would not attend.

While nothing earthshaking came out of the seminar, and certainly not from me!, it was interesting to hear the perspectives of my philosophy+psychology and chemistry colleagues, the former explaining his path from classical to Bayesian testing—while mentioning trying to read the book Statistical rethinking reviewed a few months ago—and the latter the difficulty of teaching both colleagues and students the need for an assessment of uncertainty in measurements. And alluding to GUM, developed by the Bureau International des Poids et Mesures I visited last year. I tried to present my relativity viewpoint on the [relative] nature of the prior, to avoid the usual morass of debates on the nature and subjectivity of the prior, tried to explain Bayesian posteriors via ABC, mentioned examples from The Theorem that Would not Die, yet untranslated into French, and expressed reservations about the glorious future of Bayesian statistics as we know it. This seminar was fairly enjoyable, with none of the stress induced by the constraints of a radio show. Just too bad it did not attract a wider audience!

le bayésianisme aujourd’hui [book review]

Posted in Books, pictures, Statistics, University life on March 4, 2017 by xi'an

It is quite rare to see a book published in French about Bayesian statistics and even rarer to find one that connects philosophy of science, foundations of probability, statistics, and applications in neurosciences and artificial intelligence. Le bayésianisme aujourd’hui (Bayesianism today) was edited by Isabelle Drouet, a Reader in Philosophy at La Sorbonne. And includes a chapter of mine on the basics of Bayesian inference (à la Bayesian Choice), written in French like the rest of the book.

The title of the book is rather surprising (to me) as I had never heard the term Bayesianism mentioned before. As shown by this link, the term apparently exists. (Even though I dislike the sound of it!) The notion is one of a probabilistic structure of knowledge and learning, à la Poincaré. As described in the beginning of the book. But I fear the arguments minimising the subjectivity of the Bayesian approach should not be advanced, following my new stance on the relativity of probabilistic statements, if only because they are defensive and open the path all too easily to counterarguments. Similarly, the argument according to which the “Big Data” era makes the impact of the prior negligible and paradoxically justifies the use of Bayesian methods is limited to the case of little Big Data, i.e., when the observations are more or less iid with a limited number of parameters. Not when the number of parameters explodes. Another set of arguments that I find both more modern and compelling [for being modern is not necessarily a plus!] is the ease with which the Bayesian framework allows for integrative and cooperative learning. Along with its ultimate modularity, since each component of the learning mechanism can be extracted and replaced with an alternative.

Le bayésianisme aujourd’hui

Posted in Books, Statistics on September 19, 2016 by xi'an

A few years ago, I was asked by Isabelle Drouet to contribute a chapter to a multi-disciplinary book on the Bayesian paradigm, book that is now soon to appear. In French. It has this rather ugly title of Bayesianism today. Not that I had heard of Bayesianism or bayésianisme previously. There are chapters on the Bayesian notion(s) of probability, game theory, statistics, on applications, and on the (potentially) Bayesian structure of human intelligence. Most of it is thus outside statistics, but I will certainly read through it when I receive my copy.

another riddle

Posted in Books, Kids on June 22, 2016 by xi'an

A riddle on The Riddler that pertains to game theory [and does not truly require coding]:

Ten pirates have ten gold pieces to divide and each have a unique rank, from the captain on down. The captain puts forth the first plan to divide up the gold, whereupon everyone votes. If at least half the pirates vote for the plan, it is enacted, and the gold is distributed accordingly. If the plan gets fewer than half the votes, however, the captain is killed, the second-in-command is promoted, and the process starts over.

Pirates always vote by the following rules, with the earliest rule taking precedence in a conflict:

  1. Self-preservation: A pirate values his life above all else.
  2. Greed: A pirate seeks as much gold as possible.
  3. Bloodthirst: Failing a threat to his life or bounty, a pirate always votes to kill.

Under this system, how do the pirates divide up their gold?

An obvious final configuration is when there are only two pirates left, since whatever division the first pirate proposes, including the one where he gets all the coins, the second (and last) one cannot oppose it: the proposer’s own vote already makes half. Hence the last pirate must avoid reaching a two-person configuration, while the second-to-last pirate should aim for it, but he is the only one to do so! This means that, with three pirates left, the leader and the last pirate will outvote the second-to-last, so the optimum for the third-from-last pirate is the partition (9,0,1), buying the last pirate’s vote with a single coin. Moving to the general case, when there remain 2p crew members, the pirate in command must secure a positive vote from (p-1) other pirates below him, which means those (p-1) pirates must get one coin more than what they would get on the next round. This amounts to picking the (p-1) pirates with no reward on the next round and giving them one coin each, leading to a reward of 11-p for the current leader. When there remain 2p-1 crew members, the optimal solution is identical, namely to award one coin each to the (p-1) pirates with no reward on the next round. With a total of 10 pirates, the optimal configuration for the captain is to separate the 10 coins into (6,0,1,0,1,0,1,0,1,0).
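For what it is worth, here is a minimal R sketch of this backward induction, assuming the above “at least half” rule and that a vote is bought by strictly exceeding a pirate’s share on the following round (pirate_split being my own made-up helper):

pirate_split <- function(n, coins = 10) {
  # shares for n pirates, ordered from the current captain on down
  if (n == 1) return(coins)
  nxt <- pirate_split(n - 1, coins)     # shares on the next round
  votes <- ceiling(n / 2) - 1           # extra votes beyond the captain's own
  offer <- rep(0, n - 1)
  bribed <- order(nxt)[seq_len(votes)]  # cheapest supporters, empty-handed next round
  offer[bribed] <- nxt[bribed] + 1      # one coin more than on the next round
  c(coins - sum(offer), offer)
}
pirate_split(10)   # 6 0 1 0 1 0 1 0 1 0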

optimality stands in the eye of the riddler

Posted in Books, Kids, pictures, Statistics on March 22, 2016 by xi'an

When looking at US elections on FiveThirtyEight, I came across this riddle:

Two players go (…) into separate booths (…) and a random number between zero and one appears on a screen (…) chosen from a standard uniform distribution. They can choose to keep that first number, or to (…) get a second random number, which they must keep (…) The prize (…) is awarded to the player who kept the higher number (…) Which number is the optimal cutoff for players to discard their first number and choose another? [The Riddler, Mar 4, 2016]

While the solution is now available, I wonder at the wording of this riddle, where “optimality” is not spelled out. Unless I missed some key entry, as often happens with puzzles… Assuming both players use the same “optimal” cut-off C to make their decision to keep or drop the original uniform, the probability that one does better than the other is exactly ½ since the two kept numbers are iid. When considering the expected value of the number kept by a player, a simple integral shows that this value is

½(1-C²+C),

which is maximal for C=½. If one considers instead the median of the number Z kept by a player, a bit more computation leads to med(Z) = 1/(2C) when C²>½ and (½+C)/(1+C) when C²≤½, which is maximal for C²=½.
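A quick numerical check of both formulas, assuming the above expressions for the mean and the median of the kept number Z:

C <- seq(0.01, 0.99, by = 0.01)
expZ <- (1 - C^2 + C) / 2                                     # mean of Z
medZ <- ifelse(C^2 > 1/2, 1 / (2 * C), (1/2 + C) / (1 + C))   # median of Z
C[which.max(expZ)]   # 0.5
C[which.max(medZ)]   # about 0.7, i.e., close to 1/sqrt(2)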

“…using the golden ratio gives the best chance to win the gold bullion!” [The Riddler, Mar 4, 2016]

Incidentally (or not), the solution on 538 is also difficult to understand in that it starts with the event that the first draw equals C, which is an event of probability zero. However, when trying to optimise the choice of C for one player, given that the other player uses a known cutoff D, I found that the only case where the optimal C equals D coincides with the solution proposed on the Riddler, namely ½(√5-1). To check whether or not this derivation was correct, I also plotted the theoretical (right) versus the empirical (left) versions of the winning probability:

There is no difference between the two. But the diagonal is exactly at the .5 level, as expected, with an interesting second curve at the .5 probability level. These two level sets happen to meet at the “golden number” solution, ½(√5-1), which comes as a confirmation of my earlier remark. [Another question connected with this riddle would be to figure out the value of D used by the other player from a sequence of games.]
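For completeness, a rough sketch of the kind of simulation behind the empirical plot, with win_prob a made-up helper assuming both players follow the keep-above-cutoff rule:

win_prob <- function(C, D, n = 1e5) {
  # empirical probability that the player with cutoff C beats the player with cutoff D
  keep <- function(cut) { u <- runif(n); ifelse(u > cut, u, runif(n)) }
  mean(keep(C) > keep(D))
}
win_prob((sqrt(5) - 1) / 2, 1 / 2)   # golden-ratio cutoff against C = 1/2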