## Archive for poker

## the biggest bluff [not a book review]

Posted in Books with tags book review, decision theory, Nature, CHANCE, poker, NYT, psychology, data, betting, game theory, game, not a book review on August 14, 2020 by xi'an

**I**t came as a surprise to me that the book reviewed in the book review section of Nature of 25 June was a personal account of a professional poker player, The Biggest Bluff by Maria Konnikova. (Surprise enough to write a blog entry!) I see very little scientific impetus in studying the psychology of poker players and the associated decision making. Obviously, *this is not a book review*, but a review of the book review. (Although the NYT published a rather extensive extract of the book, in which I cannot detect anything deep from a game-theory viewpoint. Apart from the maybe-not-so-deep message that psychology matters a lot in poker…) Which does not provide much incentive for those uninterested (or worse) in money games like poker. Even when “a heap of Bayesian model-building [is] thrown in”, as the review conflates randomness and luck, while seeing the book as teaching the reader “how to play the game of life”, a type of self-improvement sales pitch one hardly expects to read in a scientific journal. (But again I have never understood the point of playing poker…)

## 9 pitfalls of data science [book review]

Posted in Books, Kids, Statistics, Travel, University life with tags Austria, book review, CHANCE, Germany, jiu-jitsu, lotus, OUP, Oxford University Press, poker, Salzburg, The Book of Why, Theranos, train travel, USA on September 11, 2019 by xi'an

**I** received The 9 pitfalls of data science by Gary Smith [who has written a significant number of general-public books on personal investment, statistics and AIs] and Jay Cordes from OUP for review a few weeks ago and read it on my trip to Salzburg. This short book contains a lot of anecdotes and what I would qualify as small talk on job experiences and colleagues’ idiosyncrasies…. More fundamentally, it reads as a sequence of examples of bad or misused statistics, as many general-public books on statistics do, but with little to say on how to spot such misuses. Its title (it seems like *the 9 pitfalls of…* is a rather common début for a book title!) nonetheless started a (short) conversation with my neighbour on the train to Salzburg, as she wanted to know whether the job opportunities in data science were better in Germany than in Austria. A practically important question for which I had no clue. And I do not think the book would have helped either! (My neighbour on the earlier plane to München had a book on growing lotus, which was not particularly enticing for launching a conversation either.)

Chapter I “*Using bad data*” is made of examples of truncated or cherry-picked data, often associated with poor graphics. Only one-dimensional outcomes, and also very US-centric. Chapter II “*Data before theory*” highlights spurious correlations and post hoc predictions, with criticism of data mining, some examples being quite standard. Chapter III “*Worshiping maths*” sounds like the perfect opposite of the previous chapter: it discusses the fact that all models are wrong but some may be more wrong than others. And gives examples of overfitting, p-value hacking, and regression applied to longitudinal data. With the message that (maths) assumptions are handy and helpful but not always realistic. Chapter IV “*Worshiping computers*” is about the new golden calf and contains rather standard stuff on trusting the computer output because it is a machine. However, the book somewhat falls foul of the same mistake by trusting a Monte Carlo simulation of a shortfall probability for retirees, since Monte Carlo also depends on a model! Computer simulations may be fine for Bingo night or poker tournaments but are much more uncertain for complex decisions like retirement investments. It is also missing the biasing aspects in constructing recidivism prediction models pointed out in Weapons of math destruction. Until Chapter IX at least. The chapter also mentions adversarial attacks, if not GANs (!). Chapter V “*Torturing data*” mentions famous cheaters like Wansink of the bottomless-bowl and pizza papers and contains more about p-hacking and reproducibility. Chapter VI “*Fooling yourself*” is a rather weak chapter in my opinion. Apart from Ioannidis’ take on Theranos’ lack of scientific backing, it spends quite a lot of space on stories about poker gains in the unregulated era of online poker, with boasts of significant gains possibly earned from compulsive gamblers playing their family savings, which is not particularly praiseworthy. And about Brazilian jiu-jitsu.
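The point that a Monte Carlo answer is only as good as the model behind it is easily illustrated. The sketch below (not from the book; all figures — the 5% mean return, 15% spread, spending rate and horizon — are purely illustrative assumptions) estimates a retiree’s shortfall probability under a Gaussian return model and under a heavier-tailed one with the same mean and spread:

```python
import numpy as np

rng = np.random.default_rng(42)

def shortfall_prob(draw_return, start=1_000_000, spend=50_000,
                   years=30, n_sims=5_000):
    """Monte Carlo estimate of the probability a retiree runs out of money."""
    ruined = 0
    for _ in range(n_sims):
        wealth = start
        for _ in range(years):
            wealth = wealth * (1 + draw_return()) - spend
            if wealth <= 0:
                ruined += 1
                break
    return ruined / n_sims

# Same mean (5%) and standard deviation (15%), different tail behaviour:
normal_model = lambda: rng.normal(0.05, 0.15)
# Student-t(3) rescaled to unit variance, then shifted and scaled
heavy_model = lambda: 0.05 + 0.15 * rng.standard_t(3) / np.sqrt(3)

p_normal = shortfall_prob(normal_model)
p_heavy = shortfall_prob(heavy_model)
```

Two return models that agree on their first two moments can still return different shortfall probabilities, which is precisely the model dependence the book glosses over.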
Chapter VII “*Correlation vs causation*” predictably mentions Judea Pearl (whose book of why I just could not finish, after reading one rant too many about statisticians being unable to get causality right! Especially after discussing the book with Andrew.). But there is not so much to gather from the chapter, which could instead have delved into deep learning and its ways to avoid overfitting. The first example of this chapter is more about confusing conditionals (what is conditional on what?) than about turning causation around. Chapter VIII “*Regression to the mean*” sees Galton’s quincunx reappearing here after Pearl’s book, where I learned (and checked with Steve Stigler) that the device was indeed intended for that purpose of illustrating regression to the mean. While the attractive fallacy is worth pointing out, there are much worse abuses of regression that could have been presented. CHANCE’s Howard Wainer also makes an appearance, alongside SAT scores. Chapter IX “*Doing harm*” does engage with the issue that predicting social features like recidivism by a (black-box) software is highly worrying (and just plain wrong), if only because of this black-box nature. Moving predictably to chess and go, with the right comment that this does not say much about real data problems. A word of warning about DNA testing conveying very little about ancestry, if only because of the companies’ limited and biased databases. With further calls for data privacy and a rather useless entry on North Korea. Chapter X “*The Great Recession*”, which discusses the subprime scandal (as in Stewart’s book), contains a set of (mostly superfluous) equations from Samuelson’s paper (supposed to scare or impress the reader?!), leading to the rather obvious result that the expected concave utility of a weighted average of iid positive random variables is maximal when all the weights are equal, a result that is then criticised by laughing at the assumption of iid-ness in the case of mortgages.
Along with those who bought exotic derivatives whose construction they could not understand. The (short) chapter then goes through all the (a posteriori) obvious ingredients of the financial disaster, linking them to most of the nine pitfalls. Except the second, about data before theory, because there was no data, only theory with no connection to reality. This final chapter is rather enjoyable, if coming after the fact. And containing this altogether unnecessary mathematical entry. *[Usual warning: this review or a revised version of it is likely to appear in CHANCE, in my book reviews column.]*
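Samuelson’s “rather obvious result” and its iid caveat can both be checked numerically. The sketch below (my own illustration, not the book’s; the lognormal returns, log utility and specific weights are all arbitrary choices) estimates the expected log utility of two weightings of two positive returns, first iid, then perfectly correlated as a caricature of non-iid mortgages:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two iid positive "returns" (lognormal), shared across both weightings
# so that the comparison is paired and low-variance
x = rng.lognormal(mean=0.0, sigma=0.5, size=(n, 2))

def expected_log_utility(weights, samples):
    """Monte Carlo estimate of E[log(w1*X1 + w2*X2)] (log = concave utility)."""
    return np.mean(np.log(samples @ np.asarray(weights)))

u_equal = expected_log_utility([0.5, 0.5], x)   # diversified
u_skewed = expected_log_utility([0.9, 0.1], x)  # concentrated

# Now break the iid assumption: perfectly correlated returns
x_corr = np.repeat(rng.lognormal(0.0, 0.5, size=(n, 1)), 2, axis=1)
u_corr_equal = expected_log_utility([0.5, 0.5], x_corr)
u_corr_skewed = expected_log_utility([0.9, 0.1], x_corr)
```

Under iid-ness the equal weighting dominates, as Samuelson’s result predicts; with fully correlated returns the weights no longer matter at all, which is the gist of the mortgage criticism.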

## the incomprehensible challenge of poker

Posted in Statistics with tags /Pages/SIMAccueil.aspx, artificial intelligence, bills, deep learning, Denmark, game theory, Nature, poker, statistics and sports on April 6, 2017 by xi'an

**W**hen reading in Nature about two deep learning algorithms winning at a version of poker within a few weeks of one another, I came back to my “usual” wonder about poker, as I cannot understand it as a game. (Although I can see the point, albeit dubious, in playing to win money.) And, correlatively, [definitely] do not understand the difficulty in building an AI that plays the game. [I know, I know nothing!]

## the signal and the noise

Posted in Books, Statistics with tags Bayesian data analysis, book reviews, chess, David Hume, Deep Blue, earthquake, Kasparov, Nate Silver, poker, predictions, The Signal and The Noise, weather prediction on February 27, 2013 by xi'an

**I**t took me a while to get Nate Silver’s **the signal and the noise: why so many predictions fail – but some don’t** (hereafter s&n), and another while to read it (blame *A Memory of Light*!).

“Bayes and Price are telling Hume, don’t blame nature because you are too daft to understand it.” (s&n, p.242)

**I** find s&n highly interesting and it is rather refreshing to see the Bayesian approach so passionately promoted by a former poker player, as betting and Dutch book arguments have often been used as arguments in favour of this approach. While it works well for some illustrations in the book, like poker and the stock market, as well as political polls and sports, I prefer more decision-theoretic motivations for topics like weather prediction, sudden epidemics, global warming or terrorism. Of course, this passionate aspect makes s&n open to criticisms, like this one by Marcus and Davies in *The New Yorker* about seeing everything through Bayesian lenses. The chapter on Bayes and Bayes’ theorem (Chapter 8) is a wee bit caricaturesque in this regard. Indeed, Silver sees too much in Bayes’ *Essay*, to the point of mistakenly attributing to Bayes a discussion of Hume’s sunrise problem. (The only such remark is made in the Appendix, which was written by Price—like possibly the whole of the *Essay*!—and P.S. Laplace is the one who applied Bayesian reasoning to the problem, leading to Laplace’s succession rule.) The criticisms of frequentism are also slightly over-the-levee: they are mostly directed at inadequate models that a Bayesian analysis would similarly process in the wrong way. (Some critics argue, on the opposite, that Bayesian analysis is too dependent on the model being “right”! Or on the availability of a fully-specified model.) Seeing frequentism as restricted to “collecting data among just a sample of the population rather than the whole population” (p.252) is certainly not presenting a broad coverage of frequentism.
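For readers unfamiliar with Laplace’s succession rule: under a uniform prior on the unknown success probability, the posterior predictive probability of a further success after k successes in n trials is (k+1)/(n+2). A toy sketch (the sunrise count below is of course arbitrary):

```python
from fractions import Fraction

def laplace_succession(successes, trials):
    """Laplace's rule of succession: posterior predictive probability of
    another success after `successes` out of `trials` Bernoulli trials,
    under a uniform prior on the unknown success probability."""
    return Fraction(successes + 1, trials + 2)

# Hume's sunrise problem: after n consecutive sunrises, the probability
# the sun rises tomorrow is (n+1)/(n+2)
p_sunrise = laplace_succession(5000, 5000)

# With no data at all, the rule returns the prior predictive 1/2
p_prior = laplace_succession(0, 0)
```

Note how the rule never returns exactly 0 or 1, the hedge Laplace built against Hume’s scepticism.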

“Prediction serves a very central role in hypothesis testing, for instance, and therefore in all of science.” (s&n, p.230)

**T**he book is written in a fairly enjoyable style, highly personal *(no harm with that)* and, apart from superlativising (!) everyone making a relevant appearance—which seems the highest common denominator of all those pop’sci’ books I end up reviewing so very often! Maybe this is something like *Rule #1* in *Scientific Writing 101* courses: *“make the scientists sound real, turn ’em into real people”*—I find it rather well-organised, as it brings the reader from facts (prediction usually does poorly) to the possibility of higher-quality prediction (by acknowledging prior information, accepting uncertainty, using all items of information available, further accepting uncertainty, &tc.). I am not sure the reader is any the wiser by the end of the book on how one should improve one’s prediction tools, but there is at least a warning about the low quality of most predictions and predictive tools that should linger in the reader’s ears…. I enjoyed very much the chapter on chess, esp. the core about Kasparov’s misreading the computer’s reasons for a poor move (no further spoiler!), although I felt it was not much connected to the rest of the book.

**I**n his review, Larry Wasserman argues that the defence Silver makes of his procedure is more frequentist than Bayesian, because he relies on calibration and long-term performance. Well… Having good calibration properties does not mean a procedure is Bayesian or frequentist, simply that it makes efficient use of the available information. Anyway, I agree (!) with Larry on the point that Silver somehow confuses “Bayesian inference” with “using Bayes’ theorem”. Or puts too much meaning into the use of Bayes’ theorem, not unlike the editors of Science & Vie a few months ago. To push Larry’s controversial statement a wee bit further, I would even wonder whether the book has anything to do with inference. Indeed, in the end, I find s&n rather uninformative about statistical modelling and even more (or less!) about model checking. The only “statistical” model that is truly discussed in the book is the power-law distribution, applied to earthquakes and terrorist-attack fatalities. This is not a helpful model in that (a) it does not explain anything, as it makes no use of covariates or side information, and (b) it has no predictive power, especially in the tails. On the first point, concluding that Israel’s approach to counter-terrorism is successful because it “is the only country that has been able to bend” the power-law curve (p.442) sounds rather hasty. I’d like to see the same picture for Iraq, say. Actually, I found one in this arXiv paper. And it looks about the same for Afghanistan (Fig. 4). On the second point, the modelling is poor in handling extreme values (which are the ones of interest in both cases) and cannot face change-points or lack of stationarity, an issue not sufficiently covered in s&n in my opinion. The difficulty with modelling volatile concepts like the stock market, the next presidential election or the moves of your poker opponents is that there is no physical, immutable law at play. Things can change from one instant to the next. Unpredictably. Esp. in the tails.

**T**here are plenty of graphs in s&n, which is great, but not all of them are at the Tufte quality level. For instance, Figure 11-1 about the “average time U.S. common stock was held” contains six pie charts corresponding to six decades, each with the average time and a percentage, which presumably indicates how long a stock was held compared with the 1950s. The graph is not mentioned in the text. (I will not mention Figure 8-2!) I also spotted a minuscule typo (`probabalistic’) on Figure 10-2A.

**M**aybe one last and highly personal remark about the chapter on poker *(feel free to skip!)*: while I am a very poor card player, I do not mind playing cards (and losing) with my kids. However, I simply do not understand the rationale of playing poker. If there is no money at stake, the game does not seem to make sense, since every player can keep bluffing until the end of time. And if there is money at stake, I find the whole notion unethical. This is a zero-sum game, so the money comes from someone else’s pocket (or, more likely, someone else’s retirement plan or someone else’s kids’ college savings plan). Not much difference with the way the stock market behaves nowadays… (Incidentally, this chapter did not discuss at all the performances of computer poker programs, unexpectedly, as the number of possibilities is very small and they should thus be fairly efficient.)

## books for review (in CHANCE)

Posted in Books, R, Statistics, University life with tags biostatistics, book reviews, CHANCE, poker, R, Texas on September 14, 2012 by xi'an

**A**mong the books I received for review in CHANCE, here are some that neither I nor my “usual suspects” had enough time or interest to review:

- *R Graphics (second edition)* by Paul Murrell
- *Biostatistics: A computing approach* by Stewart Anderson
- *Advanced Bayesian methods for medical test accuracy* by Lyle Broemeling
- *Introduction to Probability with Texas hold’em examples* by Frederic Paik Schoenberg
- *X and the city* by John Adam
- *Introduction to the Theory of Statistical Inference* by Hannelore Liero and Silvelyn Zwanzig

**I**f you would like to review one of those books, send me an email with some reference/bio and your mailing address. Be warned though that I will decide in a completely arbitrary way (a) on the chosen reviewers and (b) whether or not to publish a proposed review! (The reviewer keeps the book, as a rule.)