## straightforward statistics [book review]

“I took two different statistics courses as an undergraduate psychology major [and] four different advanced statistics classes as a PhD student.” (G. Geher)

*Straightforward Statistics: Understanding the Tools of Research* by Glenn Geher and Sara Hall is an introductory textbook for psychology and other social science students, which Oxford University Press sent me for review in CHANCE. (Nice cover, by the way!) I can see the purpose behind the title, a purpose heavily stressed anew in the preface and the first chapter, but it nonetheless irks me, as it conveys the message that, with one semester of reasonable diligence in class, any college student can move on *“not only [to] understanding research findings from psychology, but also to uncovering new truths about the world and our place in it”* (p.9). Nothing less. In essence, it covers the basics found in all introductory textbooks, from descriptive statistics to ANOVA models, while the inclusion of “real research examples” in its chapters rather demonstrates how far from real research a reader of the book would stand…

“However, as often happen in life, we can be wrong…” (p.66)

**T**he book aims at teaching basic statistics to “undergraduate students who are afraid of math” (p.xiii), by using “an accessible, accurate, coherent, and engaging presentation of statistics” (p.xiv) and by reducing mathematical expressions to a bare minimum. Unfortunately, the very first formula (p.19) is meaningless, since it skips the individual indices in the sums (skipping indices is the rule throughout the book), and the second one (Table 2.7, p.22, and again Tables 2.19 and 2.20, p.43) is (a) missing both the indices and the summation symbol and (b) dividing the sum of the “squared deviation scores” by N rather than the customary N−1. I also fail to see the point of providing histograms for categorical variables with only two modalities, like “Hungry” and “Not Hungry” (Fig. 2.11, p.47)…
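For the record, the N versus N−1 distinction the book blurs is a one-argument choice in any statistical software. A minimal Python sketch, with made-up scores of mine (not the book's):

```python
import numpy as np

# Hypothetical "hunger rating" scores for N = 5 subjects.
scores = np.array([2.0, 4.0, 4.0, 5.0, 7.0])
n = scores.size

# Sample mean: (1/N) * sum_i x_i, with the indices the book omits.
mean = scores.sum() / n

# The book's version divides the sum of squared deviations by N...
var_biased = ((scores - mean) ** 2).sum() / n
# ...while the customary (unbiased) estimator divides by N - 1.
var_unbiased = ((scores - mean) ** 2).sum() / (n - 1)

# NumPy exposes the very same choice through its ddof argument:
assert np.isclose(var_biased, np.var(scores, ddof=0))
assert np.isclose(var_unbiased, np.var(scores, ddof=1))
print(mean, var_biased, var_unbiased)
```

Students who see `ddof=0` and `ddof=1` produce different numbers on the same data at least know there is a choice being made.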

“Statisticians never prove anything, thereby making prove something of a dirty word.” (p.116)

**A**s I only teach math students, I cannot judge how adequate the textbook is for psychology or other social science students. It does, however, sound highly verbose to me in its attempts to bypass mathematical formulas. For instance, the fifteen pages of the chapter on standardised scores are about moving back and forth between a raw score x and its standardised version z = (x − M)/SD, meaning x = M + SD·z.

Or the two pages (pp.71-72) of motivation for the r coefficient before the (again meaningless) formula r = (∑ Z_X Z_Y)/N, which even skips the indices of the z-scores to avoid frightening the students. (The book also asserts that a correlation of zero “corresponds to no mathematical relationship between [the] two variables whatsoever”, p.70.) Or yet the formula for (raw-score) regression (p.97), given as Ŷ = BX + A, where B = r(SD_Y/SD_X) and A = M_Y − B·M_X,

without defining B, which is apparently a typo, as the standardised regression uses β… I could keep going with such examples, but the point I want to make is this: if the authors want to reach students who have fundamental problems with a formula like the least-squares solution β̂ = (XᵀX)⁻¹Xᵀy, which does not appear in the book, they could expose them to the analysis and understanding of the output of statistical software, rather than spending a large part of the book on computing elementary quantities, like the coefficients of a simple linear regression, by hand. Instead, fundamental notions like multivariate regression are relegated to an appendix (E) of “Advanced statistics to be aware of”. Plus a two-page discussion (pp.104-105) of a study conducted by the first author on predicting preference for vaginal sex. (To put things in context, the first author also wrote *Mating Intelligence Unleashed*, which explains some of the unusual “real research examples”.)
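To illustrate my point about software: the by-hand recipe of the book (z-scores, then B and A) and a least-squares routine give the very same line, so the hand computation teaches arithmetic, not statistics. A sketch on simulated data of my own making:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=50)

# Correlation as the average product of z-scores (with indices!):
# r = (1/N) * sum_i z_{x,i} * z_{y,i}, using N-denominator SDs.
zx = (x - x.mean()) / x.std()
zy = (y - y.mean()) / y.std()
r = (zx * zy).mean()
assert np.isclose(r, np.corrcoef(x, y)[0, 1])

# Raw-score regression coefficients, computed "by hand":
# B = r * SD_Y / SD_X  and  A = M_Y - B * M_X
B = r * y.std() / x.std()
A = y.mean() - B * x.mean()

# One call to a least-squares fit returns the same slope and intercept.
B_sw, A_sw = np.polyfit(x, y, 1)
assert np.isclose(B, B_sw) and np.isclose(A, A_sw)
```

Once students trust that equivalence, class time is better spent on what the fitted line does and does not mean.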

**A**nother illustration of what I consider the wrong focus of the book is provided by the introduction to (probability and) the normal distribution in Chapter 6, which dedicates most of its pages to reading the area under the normal density from a normal table, without even providing the formula of this density. (And with an interesting typo in Figure 6.4.) Indeed, as in last-century textbooks, the book does include probability tables for standard tests, rather than relying on software and pocket calculators to move on to the probabilistic interpretation of p-values, and to the multi-layered caution that is necessary when handling hypotheses labelled as *significant*. (A caution pushed to its paroxysm in *The Cult of Significance*, which I reviewed a while ago.) The book includes a chapter on power but, besides handling coordinate axes in a weird manner (check from Fig. 9.5 onwards) and repeating everything twice for left- and right-one-sided hypotheses(!), it makes the computation of power appear to be the main difficulty, when it is its interpretation that is most delicate and fraught with danger. Were I to teach (classical) testing to math-averse undergrads, and I may actually have to next year(!), I would skip the technicalities and pile up cases and counter-cases explaining why p-values and power are not the end of the analysis. (Using Andrew’s blog as a good reservoir of such cases, as illustrated by his talk in Chamonix last January!) But I did not see any warning in this book about the dangers of manipulating data, formulating hypotheses to test after looking at the data, running multiple tests with no correction, and so on.
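Both the table-reading and the power computation that fill those chapters reduce to one-line function calls, which is precisely why I would rather spend class time on interpretation. A sketch using scipy (the test setup, a one-sided z-test on a N(μ, 1) sample, is my choice, not the book's):

```python
from scipy.stats import norm

# The tail areas that last-century tables tabulate are one call away:
# P(Z <= 1.96) for a standard normal variate Z.
assert abs(norm.cdf(1.96) - 0.975) < 1e-3

def power(mu, n, alpha=0.05):
    """Power of the one-sided z-test of H0: mu = 0 vs H1: mu > 0,
    based on n observations from a N(mu, 1) distribution."""
    z_alpha = norm.ppf(1 - alpha)          # rejection threshold
    return 1 - norm.cdf(z_alpha - mu * n ** 0.5)

# The number itself is easy; its interpretation is the hard part.
assert abs(power(0.0, 25) - 0.05) < 1e-9   # under H0, power equals the level
assert power(0.5, 25) > 0.8                # a half-sigma effect, n = 25
```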

**T**o conclude this extensive review: I, as an outsider, fail to see redeeming features that would single out *Straightforward Statistics: Understanding the Tools of Research* as a particularly enticing textbook. The authors have clearly put a lot of effort into their book, adopted what they think is the most appropriate tone to reach the students, and added very detailed homework problems along with their solutions. Still, this view makes statistics sound too straightforward, and it leads to the far too common apprehension of p-values as the ultimate assessment of statistical significance, without opening the door to alternative explanations such as outliers and model misspecification.

[Warning: this review has been published in a slightly edited version in CHANCE, Nov. 2014.]

July 3, 2014 at 5:30 pm

Your nitpicking of the notational imprecision reminded me of Jaynes’s book (Probability Theory: The Logic of Science), p. 675:

“Obviously, mathematical results cannot be communicated without some decent standards of precision in our statements. But a fanatical insistence on one particular form of precision and generality can be carried so far that it defeats its own purpose; 20th century mathematics often degenerates into an idle adversary game instead of a communication process.

“The fanatic is not trying to understand your substantive message at all, but only trying to find fault with your style of presentation. He will strive to read nonsense into what you are saying, if he can possibly find any way of doing so….”

[He has much more to say.]

In my Sheffield statistics courses I also experienced this: if I wrote X ~ N(\mu, \sigma^2), I lost a substantial number of points if I didn’t first say that X is a random variable, and that \mu is the mean and \sigma is the standard deviation of a normal distribution. If I made the same “mistake” four times in a homework, I lost the same number of points each time! I could never understand what was going on until I read the Jaynes book and the comment in it.

I *have* learnt my lesson though; now, whenever I do an assignment, I will define every single thing to the point of total and utter exhaustion :). It doesn’t make me a better statistician, but at least it doesn’t raise the ire of my profs at Sheffield. ;)

July 4, 2014 at 11:17 am

This is true of many books I have reviewed here: shying away from a few mathematical formulas led the authors to confusing, verbose, and sometimes wrong explanations that do a disservice to the readers, the book, and the topic. In the current case, I cannot understand some of the formulas, such as the one in the post, and wonder why using indexes like x_{i} is so horrible…

July 4, 2014 at 12:59 pm

There are students in areas like linguistics and psychology to whom a subscript (or two) will cause a total cognitive shutdown. Many are not used to putting in hard labor (by unpacking stuff) to understand just a single line in a book. It’s a different culture from what you’re used to.

July 3, 2014 at 5:12 pm

“they could expose them to the analysis and understanding of the outcome of statistical software rather than spending a large part of the book on computing elementary quantities like the coefficients of a simple linear regression by hand”

Well, it’s because some people do exactly that that we end up in the situation where, as Andrew put it, it becomes possible to fit models without having any idea about what you’re doing. This strategy is possibly the single most harmful approach that has been used to teach statistics to the lay person.

“I would skip the technicalities and pile up cases and counter-cases explaining why p-values and power are not the end of the analysis”

You would be right to do that, but your students will learn nothing and leave your class unprepared to do their own research. What are they supposed to do with journal editors, who control the discourse and for whom a p-value is the end of the story?

Also, it’s useless to show them negative examples without telling them what to do instead, but if you go that route, you would have to explain the why as well. Otherwise, they’ll just keep going with more and more blind application of half-understood “rules”.

July 4, 2014 at 11:14 am

Thanks, Shravan: when I get back to Paris, I will send you the book so that you can write a contradictory review!

I see your point, which is well-taken in an ideal world, but I still object to the message that students can run their own statistical analyses and draw informed conclusions after a one-semester class in which they essentially do all calculations by hand. From the perspective of limited resource allocation, a semester should rather teach them how to critically assess others’ analyses, since assuming anything more is unrealistic or deceptive.

July 4, 2014 at 12:51 pm

I agree that taking only one class can never achieve anything much (that’s how I started my journey with data analysis, in 2000, in a linguistics program, and it didn’t work out well for me either).

Hold the book; I’m going to be in Paris all of October. We can exchange the book in person ;)

July 10, 2014 at 3:04 pm

> how to critically assess others’ analyses

Interesting idea – critically assessing the design and write-up of published papers went over really well with a class of Rehab Medicine students I taught years ago. I gave them an early version of a study quality appraisal checklist (a precursor to http://www.consort-statement.org/) to use on journal papers of their choice. They invariably chose papers by their profs, and the highest score out of 10 was a 4! (Perhaps that is why I was replaced the next year.)

Analyses might be much harder, as there is not as much critical common sense involved. (For instance, one year the MSc students at the Oxford Dept of Stats complained that it was unfair to ask this of them so early in their studies.) One would need to figure out how to enable criticism while the concepts are being introduced.

If it is done, hopefully the students will pick a stats prof’s work to critically assess.

(I was disappointed with the overly rosy picture of (every?) statistician’s abilities painted in http://www.worldofstatistics.org/wos/pdfs/Statistics&Science-TheLondonWorkshopReport.pdf)

For instance, from there the students could try to critically assess this comment regarding both randomised and non-randomised published studies.

“They [Jager and Leek] reasoned that for the false hypotheses, the p-value should simply be a random number between 0 and 1. On the other hand, for the true hypotheses, they argued that the p-values should follow a distribution skewed toward zero, called a beta-distribution.”
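The premise quoted there is easy to check by simulation: p-values from true null hypotheses are uniform on (0, 1), while p-values from false nulls pile up near zero. A sketch with simulated one-sided z-tests (sample size, effect size, and seed are my own choices):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, reps = 40, 20_000

def pvalues(mu):
    # One-sided z-test p-values for reps samples of size n from N(mu, 1).
    zstats = rng.normal(mu, 1.0, (reps, n)).mean(axis=1) * np.sqrt(n)
    return 1 - norm.cdf(zstats)

p_null = pvalues(0.0)  # true null: p-values roughly Uniform(0, 1)
p_alt  = pvalues(0.5)  # false null: p-values skewed toward zero

assert abs(p_null.mean() - 0.5) < 0.02   # uniform distribution, mean 1/2
assert (p_alt < 0.05).mean() > 0.9       # most of the mass below 0.05
```

Jager and Leek's beta-distribution is one parametric way of modelling that skew-toward-zero shape; the simulation only shows the qualitative contrast their argument rests on.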