Archive for CRC Press

Model-Based Clustering, Classification, and Density Estimation Using mclust in R [not a book review]

Posted in Statistics on May 29, 2023 by xi'an

Number savvy [book review]

Posted in Books, Statistics on March 31, 2023 by xi'an

“This book aspires to contribute to overall numeracy through a tour de force presentation of the production, use, and evolution of data.”

Number Savvy: From the Invention of Numbers to the Future of Data is written by George Sciadas, a statistician working at Statistics Canada. This book is mostly about data, even though it starts with the “compulsory” tour of the invention(s) of numbers and the evolution towards a mostly universal system, and the issue of measurements (with a funny if illogical/anti-geographical confusion in “gare du midi in Paris and gare du Nord in Brussels”, since Gare du Midi (south) is in Brussels while Gare du Nord (north) is in Paris). The chapter (Chap. 3) on census and demography is quite detailed about the hurdles preventing an exact count of a population, but much less so about the methods employed to improve the estimation. (The request for me to fill in the short form for the 2023 French Census actually came while I was reading the book!)

The next chapter links measurement with socio-economic notions or models, like the unemployment rate, which depends on so many criteria (pp. 77-87) that its measurement sounds impossible or arbitrary. Almost as arbitrary as the reported number of protesters in a French demonstration! The same difficulty holds for the GDP, whose interpretation seems beyond the grasp of the common reader. And it does not cover significantly missing(-not-at-random) data like tax evasion, money laundering, and the grey economy. (Nitpicking: if GDP went down by 0.5% one year and up by 0.5% the year after, this does not exactly compensate!) Chapter 5 reflects upon the importance of definitions and boundaries in creating official statistics and categorical data. A chapter (Chap. 6) on the gathering of data in the past (read: prior to the “Big Data” explosion) prepares the ground for the chapter on the current setting. Mostly about surveys, presented as definitely belonging to the past, “shadows of their old selves”. And with anecdotes reminding me of my only experience as a survey interviewer (on Xmas practices!). About administrative data, progressively moving from being collected by design to being available for any prospection (or “farming”). A short chapter compared with the one (Chap. 7) on new data (types), mostly customer, private sector, data. Covering the data accumulated by big tech companies, but not particularly illuminating (with bar-room remarks like “Facebook users tend to portray their lives as they would like them to be. Google searches may reflect more truthfully what people are looking for.”)
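To make the nitpick explicit, successive percentage changes compound multiplicatively rather than cancel additively:

```latex
(1 - 0.005)\,(1 + 0.005) \;=\; 1 - 0.005^2 \;=\; 0.999975
```

so after the 0.5% drop and the 0.5% rebound, GDP sits 0.0025% below its starting value.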

The following Chapter 8 is somewhat confusing in its defence of microdata, by which I understand keeping the raw data rather than averaging through summary statistics. Synthetic data is mentioned there, but without reference to an underlying model, while machine learning makes a very brief appearance (p.222). In Chapter 9, (statistical) data analysis is [at last!] examined, but mostly through descriptive statistics. Except for a regression model and a discussion of the issues around hypothesis testing, with Bayesian testing making its unique visit, albeit confusedly, in-between references to Taleb’s Black Swan, Gödel’s incompleteness theorem (which always seems to fascinate authors of general-public science books!), and Kahneman and Tversky’s prospect theory. Somewhat surprisingly, the chapter also includes a Taoist tale about the farmer getting in turn lucky and unlucky… A tale that was already used in What Are the Chances?, which I reviewed two years ago. As this is a very established parable dating back at least to the 2nd century B.C., there is no copyright involved, but what are the chances the story would find its way that quickly into another book?!

The final chapter is about the future, unsurprisingly. With predictions of “plenty of black boxes“, “statistical lawlessness“, “data pooling”, and data as a commodity (which relates to some themes of our OCEAN ERC-Synergy grant). Although the solution favoured by the author is a centralised one, through a (national) statistics office or another “trusted third party“. The last section is about the predicted end of theory, since “simply looking at data can reveal patterns“, while resisting the prophets of doom and idealising the rise of the (AI) machines… The lyrical conclusion that “With both production consolidation and use of data increasingly in the ‘hands’ of machines, and our wise interventions, the more distant future will bring complete integrations” sounds too much like Brave New World for my taste!

“…the privacy argument is weak, if not hypocritical. Logically, it’s hard to fathom what data that we share with an online retailer or a delivery company we wouldn’t share with others (…) A naysayer will say nay.” (p.190)

The way the book reads and unrolls is somewhat puzzling to this reader, as it sounds like a sequence of common-sense remarks with a Guesstimation flavour on the side, plus tiny historical or technical facts, some unknown to me and most of no interest to me, while missing the larger picture. For instance, the long-winded tale of evaluating the cumulated size of a neighbourhood’s lawns (pp.34-38) does not seem to be getting anywhere. The inclusion of so many warnings, misgivings, and alternatives in the collection and definition of data may have the counter-effect of discouraging readers from making sense of numeric concepts and trusting the conclusions of data-based analyses. The constant switch in perspective(s) and the apparent absence of definite conclusions are also exhausting. Furthermore, I feel that the author and his rosy prospects repeatedly minimize the risks of data collection for individual privacy and freedom, when presenting the platforms as a solution to a real-time census (as, e.g., p.178), as exemplified by the high social control exercised by some number-savvy dictatorships! And he is highly critical of EU regulations such as GDPR, “less-than-subtle” (p.267), “with its huge impact on businesses” (p.268). I am thus overall uncertain which audience this book will eventually reach.

[Disclaimer about potential self-plagiarism: this post or an edited version will potentially appear in my Books Review section in CHANCE.]

The Effect [book review]

Posted in Books, R, Running, Statistics, University life on March 10, 2023 by xi'an

While it sounds like the title of a science-fiction catastrophe novel or of a (of course) convoluted nouveau roman, this book by Nick Huntington-Klein is a massive initiation to econometrics and causality, as explained by its subtitle, An Introduction to Research Design and Causality.

This is a hüûüge book, actually made of two parts that could have been books (volumes?) of their own. And covering three languages, R, Stata, and Python, which should have led to three independent books. (Seriously, why print three versions when you need at best one?!) I carried it with me during my vacations in Central Québec, but managed to lose my notes on the first part, which means missing the opportunity for biased quotes! It was mostly written during the COVID lockdown(s), which may account for a certain amount of verbosity and rambling around.

“My mom loved the first part of the book and she is allergic to statistics.”

The first half (which is in fact a third!) is conceptual (and chatty) and almost formula-free, based on the postulate that “it’s a pretty slim portion of students who understand a method because of an equation” (p.xxii). For this reader (or rather reviewer), relying on explanations through examples makes the reading much harder, as spotting the main point requires reading most sentences! And a very slow start, since notations and mathematical notions have to be introduced with an excess of caution (as in the distinction between Latin and Greek symbols, p.36). Moving through single-variable models, conditional distributions, a lengthy explanation of how OLS estimates are derived, data-generating processes and identification (of causes), causal diagrams, back and front doors (a recurrent notion within the book), treatment effects, and a conclusion chapter.

“Unlike statistical research, which is completely made of things that are at least slightly false, statistics itself is almost entirely true.” (p.327)

The second part, called the Toolbox, is closer to a classical introduction to econometrics, albeit with a shortage of mathematics (and no proof whatsoever), although [warning!] logarithms, polynomials, partial derivatives and matrices are used. Along with a substantial (3x) chunk allocated to printed code, the density of the footnotes significantly increases in this section. It covers an extensive chapter on regression (including testing practice, non-linear and generalised linear models, as well as a basic bootstrap without much warning about its use in… regression settings, and the LASSO), one on matching (with propensity scores, kernel weighting, and Mahalanobis weighting), one on simulation, yes simulation!, in the sense of producing pseudo-data from known generating processes to check methods, as well as the bootstrap (with resampling of residuals making at last an appearance!), and one on fixed and random effects (where the author “feels the presence of Andrew Gelman reaching through time and space to disagree”, p.405). The chapter on event studies is about time-dependent data, with a bit of ARIMA prediction (but nothing on non-stationary series and unit-root issues). The more exotic chapters cover (18) difference-in-differences models (control vs. treated groups, with John Snow pumping his way in), (19) instrumental variables (aka the minor bane of my 1980’s econometrics courses), with two-stage least squares and generalised methods of moments (if not the simulated version), (20) discontinuity (i.e., changepoints), with the limitation of having a single variate explaining the change, rather than an unknown combination of them, and a rather pedestrian approach to the issue, and (21) other methods (including the first mention of machine-learning regression/prediction and some causal forests), concluding with an “Under the rug” portmanteau.

Nothing (afaict) on multivariate regressed variates and simultaneous equations. Hardly an occurrence of Bayesian modelling (p.581), vague enough to remind me of my first course in statistics and its one-line annihilation of the notion.

Duh cover, but a nice edition, except for the huge margins that could have been cut to reduce the 622 pages by a third (and harnessed the author’s tendency towards excessive footnotes!). And an unintentional white line on p.238! Cute and vaguely connected little drawings at the head of every chapter (like the head above). A rather terse subject index (except for the entry “The first reader to spot this wins ten bucks“!), which should have been complemented with an acronym index.

“Calculus-heads will recognize all of this as taking integrals of the density curve. Did you know there’s calculus hidden inside statistics? The things your professor won’t tell you until it’s too late to drop the class.”

Obviously I am biased in that I cannot negatively comment on an author running a 5:37 mile as, by now, I can only compete far from the 5:15 of yester-decades! I am just a wee bit suspicious of the reported time, however, given that it appears exactly on page 537… (And I could have clearly taken issue with his 2014 paper, Is Robert anti-teacher? Or with the populist catering to anti-math attitudes, as in the footnote quoted above!) But I enjoyed reading the conceptual chapter on causality as well as the (more) technical chapter on instrumental variables (a notion I have consistently found confusing all the [long] way from graduate school). And while repeated references are made to Scott Cunningham’s Causal Inference: The Mixtape, I think I will stop there with 500⁺-page introductory econometrics books!

[Disclaimer about potential self-plagiarism: this post or an edited version will potentially appear in my Books Review section in CHANCE.]

statistics for making decisions [book review]

Posted in Statistics, Books on March 7, 2022 by xi'an

I bought this book [or more precisely received it from CRC Press as a ({prospective} book) review reward] as I was interested in the author’s perspectives on actual decision making (and unaware of the earlier Statistical Decision Theory book he had written in 2013). It is intended for a postgraduate semester course and is “not for a beginner in statistics”. Exercises with solutions are included in each chapter (with some R code in the solutions). From Chapter 4 onwards, the “Further reading suggestions” refer primarily to papers and books written by the author, as these chapters are based on his earlier papers.

“I regard hypothesis testing as a distraction from and a barrier to good statistical practice. Its ritualised application should be resisted from the position of strength, by being well acquainted with all its theoretical and practical aspects. I very much hope (…) that the right place for hypothesis testing is in a museum, next to the steam engine.”

The first chapter exposes the shortcomings of hypothesis testing for conducting decision making, in particular its ignoring the consequences of the decisions. A perspective with which I agree, but I fear the subsequent developments found in the book remain too formalised to be appealing, reverting to the over-simplification found in Neyman-Pearson theory. The second chapter is somewhat superfluous for a book assuming a prior exposure to statistics, with a quick exposition of the frequentist, Bayesian, and… fiducial paradigms. With estimators being first defined without reference to a specific loss function. And I find the presentation of the fiducial approach rather shaky (if usual). Esp. when the fiducial perspective is considered as a default Bayes approach in the subsequent chapters. I also do not understand the notation (p.31)

P(\hat\theta<c;\,\theta\in\Theta_\text{H})

outside of a Bayesian (or fiducial?) framework. (I did not spot typos, aside from the traditional “the the” duplicates, with at least six occurrences!)

The aforementioned subsequent chapters are not particularly enticing, as they cater to artificial loss functions and engage in detailed derivations that do not seem essential. At times they appear to be nothing more than simple calculus exercises. The very construction of the loss function, which I deem critical for implementing statistical decision theory, is mostly bypassed. The overall setting is also frighteningly unidimensional. In the parameter, in the statistic, and in the decision. Covariates only appear in the final chapter, which appears to have very little connection with decision making, in that the loss function there is the standard quadratic loss, used to achieve the optimal composition of estimators rather than to select the best model. The book is also lacking in practical or realistic illustrations.

“With a bit of immodesty and a tinge of obsession, I would like to refer to the principal theme of this book as a paradigm, ascribing to it as much importance and distinction as to the frequentist and Bayesian paradigms”

The book concludes with a short postscript (pp.247-249) reproducing the introductory paragraphs about the ill-suited nature of hypothesis testing for decision making. Which would have been better supported by a stronger engagement in eliciting loss functions and quantifying the consequences of actions from the clients…

[Disclaimer about potential self-plagiarism: this post or an edited version will eventually appear in my Book Review section in CHANCE.]

Handbooks [not a book review]

Posted in Books, pictures, Statistics, University life on October 26, 2021 by xi'an