## the worst possible proof [X’ed]

Posted in Books, Kids, Statistics, University life on July 18, 2015 by xi'an

Another surreal experience thanks to X validated! A user of the forum recently asked for an explanation of the above proof in Lynch’s (2007) book, Introduction to Applied Bayesian Statistics and Estimation for Social Scientists. No wonder this user was puzzled: the explanation makes no sense outside the univariate case… It is hard to fathom why on Earth the author would resort to this convoluted approach to conclude that the posterior conditional distribution is a normal centred at the least squares estimate and with σ⁻²X’X as precision matrix. Presumably, he has a poor opinion of the degree of matrix algebra numeracy of his readers [and thus should abstain from establishing the result], as it seems unrealistic to postulate that the author is himself confused about matrix algebra, given his MSc in Statistics [the footnote ² seen above after “appropriately” acknowledges that “technically we cannot divide by” the matrix, but it goes on to suggest multiplying the numerator by the matrix

$(X^\text{T}X)^{-1} (X^\text{T}X)$

which does not make sense either, unless one introduces the trace tr(.) operator, presumably out of reach for most readers]. And this part of the explanation is unnecessarily confusing, since a basic matrix manipulation leads directly to the result. Or, even simpler, a reference to Pythagoras’ theorem.
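The basic manipulation alluded to here is the orthogonal decomposition ‖y−Xβ‖² = ‖y−Xβ̂‖² + (β−β̂)ᵀXᵀX(β−β̂), from which the conditional posterior of β is immediately a normal centred at β̂ with precision σ⁻²XᵀX. A minimal numerical sketch (with simulated data and arbitrary dimensions, not the book's example) checking this Pythagoras identity:

```python
import numpy as np

# Check the decomposition behind the conditional posterior:
#   ||y - X b||^2 = ||y - X bhat||^2 + (b - bhat)' X'X (b - bhat)
# The cross term vanishes because X'(y - X bhat) = 0 at the least
# squares estimate, so pi(beta | sigma^2, y) is N(bhat, sigma^2 (X'X)^{-1}).

rng = np.random.default_rng(42)
n, p = 50, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n)

XtX = X.T @ X
bhat = np.linalg.solve(XtX, X.T @ y)      # least squares estimate

b = rng.normal(size=p)                    # an arbitrary value of beta
lhs = np.sum((y - X @ b) ** 2)
rhs = np.sum((y - X @ bhat) ** 2) + (b - bhat) @ XtX @ (b - bhat)
print(np.allclose(lhs, rhs))              # the two sides agree
```

Since the first term on the right does not involve β, the exponential of minus the quadratic in β is the kernel of the announced normal density, with no division by a matrix required.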

## What are the distributions on the positive k-dimensional quadrant with parametrizable covariance matrix?

Posted in Books, pictures, Statistics, University life on March 30, 2012 by xi'an

This is the question I posted this morning on StackOverflow, following an exchange two days ago with a user who could not see why the linear transform of a log-normal vector X,

$Y = \mu + \Sigma X$

could lead to negative components in Y… After searching a little while, I could not think of a joint distribution on the positive k-dimensional quadrant for which I could specify the covariance matrix in advance, except for a pedestrian construction of (x1,x2) where x1 would be an arbitrary Gamma variate [with a given variance] and x2 conditional on x1 would be a Gamma variate with parameters specified by the covariance matrix. Which does not extend nicely to larger dimensions.
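The source of the user's confusion is easy to demonstrate: a valid covariance matrix Σ may have negative off-diagonal entries, so the map X ↦ μ + ΣX can send a strictly positive vector below zero. A minimal sketch, with hypothetical numbers chosen only for illustration:

```python
import numpy as np

# Y = mu + Sigma X need not stay positive even when X > 0 componentwise:
# a positive definite Sigma with a negative covariance term can drag a
# component of Y below zero.

rng = np.random.default_rng(0)
mu = np.zeros(2)
Sigma = np.array([[1.0, -0.9],
                  [-0.9, 1.0]])            # positive definite (eigenvalues 0.1 and 1.9)

X = np.exp(rng.normal(size=(10_000, 2)))   # log-normal draws, strictly positive
Y = mu + X @ Sigma.T                       # row-wise mu + Sigma x

print((X > 0).all())                       # X lives in the positive quadrant
print((Y < 0).any())                       # yet Y escapes it
```

Here Y₁ = X₁ − 0.9 X₂ is negative whenever X₂ exceeds X₁/0.9, which happens with substantial probability; this is why constructing a positive-quadrant distribution with an arbitrary prescribed covariance is the genuinely hard part of the question.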

## Matrix tricks [by Puntanen, Styan, & Isotalo]

Posted in Books, pictures, Statistics, University life on October 15, 2011 by xi'an

While I have not read it, but only browsed through it, the book Matrix Tricks for Linear Statistical Models: Our Personal Top Twenty by Simo Puntanen, George P. H. Styan and Jarkko Isotalo sounds like a very enjoyable book! It contains not only tricks, but also stories, pictures, and even stamps! Simo is the current editor of the book review section of the International Statistical Review, where he handles my book reviews with much benevolence, while George was for many years the fantastic editor of the IMS Bulletin. (Just to warn the readership: I may be slightly biased in my evaluation!)