Archive for design

Principles of Applied Statistics

Posted in Books, Statistics, University life on February 13, 2012 by xi'an

This book by David Cox and Christl Donnelly, Principles of Applied Statistics, is an extensive coverage of all the necessary steps and precautions one must go through when contemplating applied (i.e. real!) statistics. As the authors write in the very first sentence of the book, “applied statistics is more than data analysis” (p.i); the title could indeed have been “Principled Data Analysis”! Principles of Applied Statistics reminded me of how much we (at least I) take “the model” and “the data” for granted when doing statistical analyses, by going through all the pre-data and post-data steps that lead to the “idealized” (p.188) data analysis. The contents of the book are intentionally simple, with hardly any mathematical aspect, but with a clinical attention to exhaustiveness and clarity. For instance, even though I would have enjoyed more stress on probabilistic models as the basis for statistical inference, they only appear in the fourth chapter (out of ten), with errors-in-variables models. The painstakingly careful coverage of the myriad of tiny but essential steps involved in a statistical analysis, and the highlighting of the numerous corresponding pitfalls, were certainly illuminating to me. Just as the book refrains from mathematical digressions (“our emphasis is on the subject-matter, not on the statistical techniques as such”, p.12), it stops short of engaging with detailed and complex data stories. Instead, it uses little grey boxes to convey the pertinent aspects of a given data analysis, referring to a paper for the full story. (I acknowledge this may be frustrating at times, as one would like to read more…) The book reads very nicely and smoothly, and I must acknowledge I read most of it in trains, métros, and planes over the past week. (This remark is not intended as a criticism hinting at a lack of depth or interest, by all means [and medians]!)

“A general principle, sounding superficial but difficult to implement, is that analyses should be as simple as possible, but not simpler.” (p.9)

To get into more details, Principles of Applied Statistics covers (most of!) the purposes of statistical analyses (Chap. 1); design, with some special emphasis (Chap. 2-3), which is not surprising given the record of the authors (and “not a moribund art form”, p.51); measurement (Chap. 4), including the special case of latent variables and their role in model formulation; preliminary analysis (Chap. 5), by which the authors mean data screening and graphical pre-analysis; [at last!] models (Chap. 6-7), separated into model formulation [debating the nature of probability] and model choice, the latter being somewhat removed from the standard meaning of the term (covered instead in §8.4.5 and §8.4.6); formal [mathematical] inference (Chap. 8), covering in particular testing and multiple testing; interpretation (Chap. 9), i.e. post-processing; and a final epilogue (Chap. 10). The readership of the book is rather broad, from practitioners to students (although both categories do require a good dose of maturity), to teachers, to scientists designing experiments with a statistical mind. It may be deemed too philosophical by some, too allusive by others, but I think it constitutes a magnificent testimony to the depth and to the spectrum of our field.

“Of course, all choices are to some extent provisional.” (p.130)

As a personal aside, I appreciated the illustration through capture-recapture models (p.36), with a remark on the impact of toe-clipping on frogs, as it reminded me of a similar way of marking lizards when my (then) student Jérôme Dupuis was working on a corresponding capture-recapture dataset in the 1990s. On the other hand, while John Snow’s story [of using maps to explain the cause of cholera] is alluring, and his map makes for a great cover, I am less convinced it is particularly relevant within this book.

“The word Bayesian, however, became more widely used, sometimes representing a regression to the older usage of flat prior distributions supposedly representing initial ignorance, sometimes meaning models in which the parameters of interest are regarded as random variables and occasionally meaning little more than that the laws of probability are somewhere invoked.” (p.144)

My main quibble with the book lies, most unsurprisingly!, with the treatment of Bayesian analysis found in Principles of Applied Statistics (pp.143-144). Indeed, on the one hand, the method is mostly criticised over those two pages. On the other hand, it is the only method presented with this level of detail, including historical background, which seems a bit superfluous for a treatise on applied statistics. The drawbacks mentioned are (p.144):

  • the weight of prior information or modelling as “evidence”;
  • the impact of “indifference or ignorance or reference priors”;
  • whether or not empirical Bayes modelling has been used to construct the prior;
  • whether or not the Bayesian approach is anything more than a “computationally convenient way of obtaining confidence intervals”.

The empirical Bayes perspective is the original one found in Robbins (1956) and seems to find grace in the authors’ eyes (“the most satisfactory formulation”, p.156), unlike MCMC methods, deemed “a black box in that typically it is unclear which features of the data are driving the conclusions” (p.149)…
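For readers who have not met it, here is a minimal sketch (mine, not the book’s) of the Robbins (1956) nonparametric empirical Bayes recipe in the classical Poisson setting: the posterior mean E[λ|X=k] = (k+1)f(k+1)/f(k) only involves the marginal pmf f, which Robbins replaces with empirical frequencies. The Gamma prior and all numerical values below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A compound Poisson setting: each unit i has its own rate lambda_i drawn
# from an unknown prior, and we only observe X_i ~ Poisson(lambda_i).
n = 10_000
lam = rng.gamma(shape=2.0, scale=1.5, size=n)   # hidden from the analyst
x = rng.poisson(lam)

# Robbins' identity: E[lambda | X = k] = (k + 1) f(k + 1) / f(k), with f the
# marginal pmf of X; plugging in empirical frequencies gives the
# nonparametric empirical Bayes estimate.
counts = np.bincount(x, minlength=x.max() + 2)

def robbins(k):
    """Empirical Bayes posterior-mean estimate for a unit observing k events."""
    if k + 1 >= len(counts) or counts[k] == 0:
        return float("nan")
    return (k + 1) * counts[k + 1] / counts[k]

for k in range(6):
    print(k, round(robbins(k), 2))
```

The point of the sketch is that no parametric form is ever assumed for the prior: the data of all units jointly stand in for it, which is presumably what makes the formulation “satisfactory” in the authors’ eyes.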

“If an issue can be addressed nonparametrically then it will often be better to tackle it parametrically; however, if it cannot be resolved nonparametrically then it is usually dangerous to resolve it parametrically.” (p.96)

Apart from a more philosophical paragraph on the distinction between machine learning and statistical analysis in the final chapter, pointing at the drawback of black-box methods such as neural nets (p.185), there is relatively little coverage of nonparametric models, the preference for “parametric formulations” (p.96) being openly stated. I can somewhat understand this perspective for simpler settings, namely that nonparametric models offer little explanation of the production of the data. However, in more complex models, nonparametric components often are a convenient way to dispose of burdensome nuisance parameters… Again, technical aspects are not the focus of Principles of Applied Statistics, which also explains why it does not dwell at length on nonparametric models.

“A test of meaningfulness of a possible model for a data-generating process is whether it can be used directly to simulate data.” (p.104)

The above remark is quite interesting, especially when accounting for David Cox’s current appreciation of ABC techniques. The impossibility of generating from a posited model, as with some models found in econometrics, precludes the use of ABC, but this does not necessarily mean the model should be excluded as unrealistic…
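To make the link with ABC concrete for readers unfamiliar with it, here is a minimal rejection-ABC sketch (mine, not from the book): all the algorithm asks of the model is the ability to simulate data given a parameter value, which is exactly the meaningfulness test quoted above. The normal-mean example, the prior, the summary statistic and the tolerance are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Observed" data, here simply simulated from a Normal(2, 1) truth.
y_obs = rng.normal(loc=2.0, scale=1.0, size=50)
s_obs = y_obs.mean()                             # summary statistic

def abc_rejection(n_draws=100_000, eps=0.05):
    """Rejection ABC: keep prior draws whose simulated summary lands close
    to the observed one. The model is only used through simulation."""
    theta = rng.normal(loc=0.0, scale=5.0, size=n_draws)      # prior draws
    sims = rng.normal(loc=theta, scale=1.0, size=(50, n_draws))
    s_sim = sims.mean(axis=0)                                 # same summary
    keep = np.abs(s_sim - s_obs) < eps                        # tolerance test
    return theta[keep]

post = abc_rejection()
print(f"{post.size} accepted draws, posterior mean approx. {post.mean():.2f}")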

“The overriding general principle is that there should be a seamless flow between statistical and subject-matter considerations.” (p.188)

As mentioned earlier, the last chapter brings a philosophical conclusion on what (applied) statistics is. It stresses the need for a careful and principled use of black-box methods, so that they preserve a general framework and lead to explicit interpretations.

Bargain!

Posted in Books, Statistics on May 8, 2010 by xi'an

Amazon is currently selling The Bayesian Choice at a bargain price of $32.97, a rebate of 40% on the list price! And this is the hardcover version… When I first saw the hardcover version, I thought the printer had made a mistake, but it is no mistake! (While I did not design the cover of this third version of The Bayesian Choice, it is my favorite one. Good thing I did not turn to design rather than statistics.) The Bayesian Choice is thus cheaper than Introducing Monte Carlo Methods with R, despite being more than twice as long and hardcover… Note also that Springer introduced a paperback version of Monte Carlo Statistical Methods, but it is surprisingly more expensive than the hardcover on Amazon ($109 versus $76).

JSM 2009 impressions [day 2]

Posted in Books, Statistics, University life on August 4, 2009 by xi'an

Julien Cornebise wrote his impressions on yesterday [day 2] as comments to day 1, and he is welcome as a guest editor! I completely agree with his views on George Casella’s Medallion Lecture on design, which emphasized the need to reconsider this somewhat neglected part of the Statistics curriculum. George’s lecture was both passionate and broad, which made it accessible to the large audience there. It was based on his Statistical Design book, on sale at the Springer Verlag booth in the Exhibit Hall (where you can also check the Enigma machine at the NSA booth). The booth also displays a whole table of new books in the Use R! series, soon to be augmented by our book Introducing Monte Carlo Methods with R with George Casella, available there in a draft version. (We actually signed the contract for Introducing Monte Carlo Methods with R with Springer yesterday afternoon.) The Springer editor, John Kimmel, is one of the ASA Fellows this year, in recognition of his support of the dissemination of new ideas in Statistics (my wording). This is a great initiative from the ASA committee on Fellows, as he unreservedly deserves it, if only for launching the Use R! series!

As mentioned by Julien, the session on the future of Statistics was reserved to the happy “few” who managed to get a seat, while others had to stay in the “present”, thanks to a safety regulation that seems to be enforced for some talks/rooms and not others. I passed the first people being stopped by a fierce guard on my way to the “past”, i.e. to the cosmology and astrophysics session. There, I very much enjoyed Larry Wasserman’s talk on Nonparametric estimation of filaments, both for uncovering a challenging problem and for his elegant resolution of it, as well as the presentation by Laura Cayon on Detection of weak lensing, where I discovered that my old Purdue friend Anirban das Gupta was also involved in cosmology. I also went to the Monte Carlo and Sequential Analyses: Methods and Applications session, organised by Mike West, but the talks were too short to make much of an impact on me, even though I appreciated the talk by Minghui Shi on Particle stochastic search for high-dimensional variable selection, which linked with Nicolas Chopin’s early work on exploring a large dataset, and I was also intrigued by the talk of Ioanna Manolopoulou on Targeted sequential resampling from large Data sets in mixture modeling, which uses proxies for the real mixture model. The day ended with a Board meeting for ISBA, which unfortunately took place outside in hot, humid weather… I now have to get ready for the Gertrude Cox Scholarship 5k race, since it starts at 6:15am (yes, am!).