Archive for Eiffel Peak

a computational approach to statistical learning [book review]

Posted in Books, R, Statistics, University life on April 15, 2020 by xi'an

This book was sent to me by CRC Press for review for CHANCE. I read it over a few mornings while [confined] at home and found it much more computational than statistical, in the sense that the authors go quite thoroughly into the construction of standard learning procedures, including home-made R code that obviously helps in understanding the nitty-gritty of these procedures, what they call try and tell, but the statistical meaning and uncertainty of these procedures remain barely touched upon by the book. This is not uncommon in the machine-learning literature, where prediction error on the testing data often appears to be the final goal, but it is not so traditionally statistical. The authors introduce their work as (a computational?) supplement to Elements of Statistical Learning, although I would find it hard either to squeeze both books into one semester or to dedicate two semesters to the topic, especially at the undergraduate level.

Each chapter includes an extended analysis of a specific dataset, and this is an asset of the book, even if it sometimes over-reaches in selling the predictive power of the procedures. Printed extensive R scripts may prove tiresome in the long run, at least to me, but this may simply be a generational gap! And the learning models are mostly unidimensional, see e.g. the chapter on linear smoothers, with imho a profusion of methods. (Could someone please explain the point of Figure 4.9 to me?) The chapter on neural networks has a fairly intuitive introduction that should reach fresh readers, although meeting the handwritten digit data made me shift back to the late 1980's, when my wife was working on automatic character recognition. But I found the visualisation of the learning weights for character classification hinting at their shape (p.254) most alluring!

Among the things I missed when reading through this book: a life-line on the meaning of a statistical model beyond prediction; attention to misspecification, uncertainty and variability, especially when reaching outside the range of the learning data, and even more so when returning regression outputs with significance stars; discussions of the assessment tools, like the distance used in the objective function (for instance lacking scale invariance when adding errors on the regression coefficients) or the unprincipled multiplication of calibration parameters; some asymptotics, or at least one remark on the information loss due to splitting the data into chunks, and some (asymptotic) substance behind the use of "consistent"; and having to wait until a single page, 319, to see "data quality issues" mentioned. While the methodology is defended by algebraic and calculus arguments, there is very little on the probability side, which explains why the authors consider that the students need "be familiar with the concepts of expectation, bias and variance". And only that. A few paragraphs on the Bayesian approach do more harm than good, especially with so little background in probability and statistics.

The book possibly contains the most unusual introduction to the linear model I can remember reading: coefficients as derivatives… followed by a very detailed coverage of matrix inversion and singular value decomposition. (This would not sound like the #1 priority were I to give such a course.)
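For what it is worth, here is a minimal R sketch (mine, not the authors' code) of this SVD route to least squares, with made-up data and dimensions, checked against lm():

```r
# least squares via the SVD: if X = U D V', then beta-hat = V D^{-1} U' y
# (a toy illustration, not code from the book)
set.seed(1)
n <- 100
X <- cbind(1, rnorm(n), rnorm(n))            # design matrix with intercept
y <- drop(X %*% c(2, -1, 0.5) + rnorm(n))    # simulated response

s <- svd(X)                                  # X = U D V'
beta_hat <- s$v %*% ((t(s$u) %*% y) / s$d)   # V D^{-1} U' y
cbind(svd = beta_hat, lm = coef(lm(y ~ X - 1)))  # the two solutions agree
```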

The inevitable typo "the the" was found on page 37! A less common typo was Jensen's inequality spelled as "Jenson's inequality", both in the text (p.157) and in the index, followed by a repetition of the same formula in (6.8) and (6.9). A "stwart" (p.179) made me search for a while for this unknown verb. Another typo occurs in the Nadaraya-Watson kernel regression, when the bandwidth h suddenly turns into n (and I had to check twice because of my poor eyesight!). There is an unusual use of partition, where the sets in the partition are themselves called partitions. Similarly, a fluctuating use of dots for products in dimension one, including a form of ⊗ for the matricial product (in equation (8.25)), followed on the next page by a different notation for the Hadamard product. (I also suspect the matrix K in (8.68) is missing 1's, or else I am missing the point, since K is the number of kernels on the next page, just after a picture of the Eiffel Tower…) There is a surprising number of references for an undergraduate textbook, with authors sometimes cited with full name and sometimes with last name only, and technical reports that do not belong in a book at this level. Let me add the pedantic remark that Conan Doyle wrote more novels "that do not include his character Sherlock Holmes" than novels which do.
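As a reminder of why the bandwidth matters there, here is a minimal R sketch of the Nadaraya-Watson estimator, entirely my own construction (Gaussian kernel, simulated data) rather than the book's code: the fit at x is a kernel-weighted average of the responses, with h (not n!) tuning the amount of smoothing.

```r
# Nadaraya-Watson: m-hat(x) = sum_i K((x - x_i)/h) y_i / sum_i K((x - x_i)/h)
# (toy illustration, my own choices of kernel and data)
set.seed(1)
x <- runif(200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)

nw <- function(x0, x, y, h) {
  w <- dnorm((x0 - x) / h)        # Gaussian kernel weights, bandwidth h
  sum(w * y) / sum(w)
}
grid <- seq(0, 1, length.out = 100)
fit <- sapply(grid, nw, x = x, y = y, h = 0.05)  # small h: wiggly fit
plot(x, y); lines(grid, fit, col = "red")
```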

[Disclaimer about potential self-plagiarism: this post or an edited version will eventually appear in my Books Review section in CHANCE.]

posterior distribution missing the MLE

Posted in Books, Kids, pictures, Statistics on April 25, 2019 by xi'an

An X validated question as to why the MLE is not necessarily (well) covered by a posterior distribution. Even for a flat prior… Which in retrospect highlights the fact that the MLE (and the MAP) are invasive species in a Bayesian ecosystem. Since they do not account for the dominating measure. And hence do not fare well under reparameterisation. (As a very much to-the-side comment, I also managed to write an almost identical and simultaneous answer to the first answer to the question.)
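To make the reparameterisation point concrete, here is a minimal R sketch, my own illustration rather than anything from the X validated thread: with binomial data and a flat prior on p, the MAP coincides with the MLE, but moving to the logit scale brings in a Jacobian that shifts the posterior mode away.

```r
# toy illustration (mine): flat prior on a binomial probability p
x <- 3; n <- 10
map_p <- x / n                    # posterior is Beta(x+1, n-x+1), mode = MLE

# reparameterise as theta = logit(p): the posterior density of theta
# picks up the Jacobian dp/dtheta = p(1-p)
log_post_theta <- function(theta) {
  p <- plogis(theta)
  dbeta(p, x + 1, n - x + 1, log = TRUE) + log(p * (1 - p))
}
map_theta <- optimize(log_post_theta, c(-10, 10), maximum = TRUE)$maximum
c(logit_of_map_p = qlogis(map_p), mode_in_theta = map_theta)
# roughly -0.85 versus -0.69: the MLE is invariant under the transform,
# the posterior mode is not
```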

Off to Banff!!

Posted in Books, Mountains, R, Statistics, Travel, University life on September 10, 2010 by xi'an

Today I am travelling from Paris to Banff, via Amsterdam and Calgary, to take part in the Hierarchical Bayesian Methods in Ecology two-day workshop organised at BIRS by Devin Goodsman (University of Alberta), François Teste (University of Alberta), and myself. I am very excited both by the opportunity to meet young researchers in ecology and forestry, and by the prospect of spending a few days in the Rockies, hopefully with an opportunity to go hiking, scrambling and even climbing. (Plus the purely random crossing of Julien's trip in this area!) The slides will mostly follow those of the course I gave in Aosta, while using Introducing Monte Carlo Methods with R for the R practicals:

Bayes in the Rockies

Posted in Books, Mountains, Statistics, University life on June 25, 2010 by xi'an

A few months ago, following the course I gave last summer in Val d'Aosta, I was contacted by Devin Goodsman from the University of Alberta about organising a short course there for environmental scientists in hierarchical Bayes modelling and its R implementation. I (obviously) accepted the invitation and, thanks to Devin and François Teste, we managed to get the short course housed at the wonderful BIRS (Banff International Research Station for Mathematical Innovation and Discovery) centre in Banff, in the very heart of the Canadian Rockies. The last time I was at BIRS, four years ago, was for the 07w5079 Bioinformatics, Genetics and Stochastic Computation: Bridging the Gap workshop, organised jointly with Arnaud Doucet and Raphaël Gottardo from UBC, and I enjoyed the place and organisation immensely. (Not to mention the surroundings, as shown in this picture of Mount Temple, climbed in 2002, taken from Eiffel Peak.)

The current workshop, Hierarchical Bayesian Methods in Ecology, is numbered 10w2170 and will take place over the weekend of September 10-12 this Fall. The schedule is provided on the workshop webpage. I plan to cover Chapters 2, 3 and 4 of Bayesian Core on Saturday, before spending all of Sunday morning and early afternoon on hierarchical models. I actually plan to take advantage of the revision of Bayesian Core in Luminy next week to include a chapter on hierarchical modelling. Now, there are still a few places left before the (strict) upper limit of 20 participants is reached, so, if you are interested in attending the workshop/course, contact me at my gmail email account: bayesianstatistics. (The cost is $220, which includes two nights of lodging at BIRS and a copy of Introducing Monte Carlo Methods with R! Meals and transportation are not included.)
