Archive for February, 2013

reading classics (#10 and #10bis)

Posted in Books, Statistics, University life on February 28, 2013 by xi'an

Today's classics seminar was rather special as two students were scheduled to talk. It was even more special as both students had picked (without informing me) the very same article by Berger and Sellke (1987), Testing a point-null hypothesis: the irreconcilability of p-values and evidence, on the (deep?) discrepancies between frequentist p-values and Bayesian posterior probabilities, in connection with the Lindley-Jeffreys paradox. Here are Amira Mziou's slides:

and Jiahuan Li’s slides:

for comparison.

It was a good exercise to listen to both talks, seeing two perspectives on the same paper, and I hope the students in the class got the idea(s) behind the paper. As you can see, there were obviously repetitions between the talks, including the presentation of the lower bounds for all the classes considered by Jim Berger and Tom Sellke, and the overall motivation for the comparison. Maybe as a consequence of my criticisms of the previous talk, both Amira and Jiahuan put some stress on the definitions needed to formally set up the background of the paper. (I love the poetic line: "To prevent having a non-Bayesian reality", although I am not sure what Amira meant by this…)
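As a reminder of the flavour of those lower bounds, in the simplest setting of a single normal observation x ~ N(θ,1) and the point null H0: θ=θ0 (this is my own quick sketch from memory, to be checked against the paper): minimising the Bayes factor over all prior distributions on the alternative gives

\underline{B}(t) = \exp\{-t^2/2\}
\qquad\text{and}\qquad
\underline{P}(H_0\mid x) = \left(1+\frac{1-\pi_0}{\pi_0}\,\exp\{t^2/2\}\right)^{-1},

with t=|x-\theta_0|. At t=1.96, i.e. a p-value of 0.05, and with equal prior weights \pi_0=1/2, the lower bound is already about 0.128, more than twice the p-value, and it gets closer to 0.3 when the minimisation is restricted to smaller classes such as symmetric unimodal priors.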

I like the connection made therein with the Lindley-Jeffreys paradox, since this is the core idea behind the paper, and because I am currently writing a note about the paradox. Obviously, it was hard for the students to take a more remote stand on the reason for the comparison, from questioning the relevance of testing point null hypotheses and of comparing the numerical values of a p-value with a posterior probability, to expecting asymptotic agreement between a p-value and a Bayes factor when both are convergent quantities, to setting the same weight on both hypotheses, to the ad-hockery of using a drift on one to equate the p-value with the Bayes factor, to using specific priors like Jeffreys's (which has the nice feature that it corresponds to g=n in the g-prior, as discussed in the new edition of Bayesian Core). The students also failed to remark on the fact that the developments were only for real parameters, as the phenomenon (that the lower bound on the posterior probabilities is larger than the p-value) does not happen so universally in larger dimensions. I would have expected more discussion from the floor, but we still got good questions and comments on a) why 0.05 matters and b) why comparing p-values and posterior probabilities is relevant. The next paper to be discussed will be Tukey's piece on the future of statistics.

forgotten snapshot from Bristol

Posted in Statistics on February 27, 2013 by xi'an

house in Clifton, Bristol, Sept. 28, 2012

the signal and the noise

Posted in Books, Statistics on February 27, 2013 by xi'an

It took me a while to get Nate Silver's the signal and the noise: why so many predictions fail – but some don't (hereafter s&n) and another while to read it (blame A Memory of Light!).

“Bayes and Price are telling Hume, don’t blame nature because you are too daft to understand it.” s&n, p.242

I find s&n highly interesting and it is rather refreshing to see the Bayesian approach so passionately promoted by a former poker player, as betting and Dutch book arguments have often been used as arguments in favour of this approach. While it works well for some illustrations in the book, like poker and the stock market, as well as political polls and sports, I prefer more decision-theoretic motivations for topics like weather prediction, sudden epidemics, global warming or terrorism. Of course, this passionate aspect makes s&n open to criticisms, like this one by Marcus and Davis in The New Yorker about seeing everything through the Bayesian lens. The chapter on Bayes and Bayes' theorem (Chapter 8) is a wee bit caricaturesque in this regard. Indeed, Silver sees too much in Bayes' Essay, to the point of mistakenly attributing to Bayes a discussion of Hume's sunrise problem. (The only remark on the problem is made in the Appendix, which was written by Price—like possibly the whole of the Essay!—and P.S. Laplace is the one who applied Bayesian reasoning to it, leading to Laplace's succession rule.) The criticisms of frequentism are also slightly over-the-levee: they are mostly directed at inadequate models that a Bayesian analysis would process in a similarly wrong way. (Some critics argue, on the contrary, that Bayesian analysis depends too much on the model being "right"! Or on the availability of a fully-specified model.) Seeing frequentism as restricted to "collecting data among just a sample of the population rather than the whole population" (p.252) certainly does not offer a broad picture of frequentism.

“Prediction serves a very central role in hypothesis testing, for instance, and therefore in all of science.” s&n, p.230

The book is written in a fairly enjoyable style, highly personal (no harm with that) and, apart from superlativising (!) everyone making a relevant appearance—which seems to be the highest common denominator of all those pop'sci' books I end up reviewing so very often (maybe this is something like Rule #1 in Scientific Writing 101 courses: "make the scientists sound real, turn'em into real people")—I find it rather well-organised as it brings the reader from facts (prediction usually does poorly) to the possibility of higher quality prediction (by acknowledging prior information, accepting uncertainty, using all items of information available, further accepting uncertainty, &tc.). I am not sure the reader is the wiser by the end of the book on how one should improve one's prediction tools, but there is at least a warning about the low quality of most predictions and predictive tools that should linger in the reader's ears… I enjoyed very much the chapter on chess, esp. the core story about Kasparov's misreading of the computer's reasons for a poor move (no further spoiler!), although I felt it was not much connected to the rest of the book.

In his review, Larry Wasserman argues that the defence Silver makes of his procedure is more frequentist than Bayesian, because he relies on calibration and long-term performance. Well… Having good calibration properties does not mean the procedure is frequentist rather than Bayesian (or the reverse), simply that it is making efficient use of the available information. Anyway, I agree (!) with Larry on the point that Silver somehow "confuses Bayesian inference with using Bayes' theorem". Or puts too much meaning in the use of Bayes' theorem, not unlike the editors of Science & Vie a few months ago. To push Larry's controversial statement a wee further, I would even wonder whether the book has anything to do with inference. Indeed, in the end, I find s&n rather uninformative about statistical modelling and even more (or less!) about model checking. The only "statistical" model that is truly discussed over the book is the power law distribution, applied to earthquakes and terrorist attack fatalities. This is not a helpful model in that (a) it does not explain anything, as it does not make use of covariates or side information, and (b) it has no predictive power, especially in the tails. On the first point, concluding that Israel's approach to counter-terrorism is successful because it "is the only country that has been able to bend" the power-law curve (p.442) sounds rather hasty. I'd like to see the same picture for Iraq, say. Actually, I found one in this arXiv paper. And it looks about the same for Afghanistan (Fig.4). On the second point, the modelling is poor at handling extreme values (which are the ones of interest in both cases) and cannot face change-points or lack of stationarity, an issue not sufficiently covered in s&n in my opinion. The difficulty with modelling volatile concepts like the stock market, the next presidential election or the moves of your poker opponents is that there is no physical, immutable law at play. Things can change from one instant to the next. Unpredictably. Esp. in the tails.
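To make the second point more concrete (my own back-of-the-envelope illustration, not an argument taken from the book): if exceedances above a threshold x_min are modelled by a power law,

P(X \ge x) = (x/x_{\min})^{-\alpha},
\qquad
\log P(X \ge x) = -\alpha\,\log(x/x_{\min}),

then an error \delta on the estimated exponent multiplies the predicted exceedance probability at level x by a factor (x/x_{\min})^{\pm\delta}. Three orders of magnitude above the threshold, even \delta=0.3 already amounts to a factor of about 8, which is why extrapolating the fit to the extreme (and most consequential) events is so fragile.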

There are plenty of graphs in s&n, which is great, but not all of them are at the Tufte quality level. For instance, Figure 11-1 about the "average time U.S. common stock was held" contains six pie charts, one per decade, each showing the average holding time and a percentage that presumably expresses how long a stock was held compared with the 1950s. The graph is not mentioned in the text. (I will not mention Figure 8-2!) I also spotted a minuscule typo (`probabalistic') on Figure 10-2A.

Maybe one last and highly personal remark about the chapter on poker (feel free to skip!): while I am a very poor card player, I do not mind playing cards (and losing) with my kids. However, I simply do not understand the rationale of playing poker. If there is no money at stake, the game does not seem to make sense since every player can keep bluffing until the end of time. And if there is money at stake, I find the whole notion unethical. This is a zero-sum game, so the money comes from someone else's pocket (or more likely someone else's retirement plan or someone else's kids' college savings plan). Not much difference from the way the stock market behaves nowadays… (Incidentally, and unexpectedly, this chapter did not discuss at all the performances of computer poker programs, even though the number of possibilities is very small and they should thus be fairly efficient.)

one of those weird LaTeX incidents…

Posted in Books, University life on February 26, 2013 by xi'an

This afternoon, I had to recompile a LaTeX file written two years ago (in French) and it returned the error:

 Missing \endcsname inserted.
<to be read again>
                   \unskip
l.182 \citeyear{berger:robert:1990}
                                    avec James Berger dans un article aux {\...

and I couldn't figure out what had changed in the compiler to produce this strange message… Except that removing the line

\usepackage[french]{babel}

made the error message disappear (but produced garbage French!).

I thus checked on forums and found a solution on a French LaTeX users' forum called MathemaTeX. The issue had to do with using `:' in my reference codes, as babel's french option turns `:' into an active character (to manage French spacing around punctuation), which clashes with colons inside BibTeX keys… However, after fiddling with this solution, I ended up with

\usepackage[french,english]{babel}

which was enough to remove the error message and produce the same output as earlier. Weird! (As I always use the same BibTeX file and do use babel-french from time to time… Maybe not that often with citations, actually!)
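For the record, a babel workaround that should also handle this case is to keep the french option alone and switch off the active colon inside the document. Here is a minimal sketch (untested as such, with natbib and the bibliography name as mere placeholders for whatever the original file uses):

\documentclass{article}
\usepackage[french]{babel}
\usepackage{natbib}   % placeholder: whichever package provides \citeyear
\begin{document}
\shorthandoff{:}      % `:' stops being an active character, so colons in
                      % citation keys like berger:robert:1990 are left alone
                      % (at the cost of babel's automatic spacing before colons)
\citeyear{berger:robert:1990}
\bibliographystyle{plainnat}
\bibliography{biblio} % placeholder bibliography file
\end{document}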

rise of the B word

Posted in Statistics on February 26, 2013 by xi'an

comparison of the uses of the words Bayesian, maximum likelihood, and frequentist, using Google Ngram

While preparing a book chapter, I checked on Google Ngram viewer the comparative uses of the words Bayesian (blue), maximum likelihood (red) and frequentist (yellow), producing the above (screen-copy quality, I am afraid!). It shows an increase in the use of the B word from the early 1980s and not the sudden rise in the 1990s I was expecting. The inclusion of "frequentist" is definitely in the joking mode, as this is not a qualification used by frequentists to describe their methods. In other words (!), "frequentist" does not occur very often in frequentist papers (and not as often as in Bayesian papers!)…
