Archive for International Statistical Review

someone who might benefit from increased contacts with the statistical community

Posted in Books, Statistics on July 23, 2012 by xi'an

A (kind of automated) email I got today:

Your name has come to our attention as someone who might benefit from increased contacts with the international statistical community. Given your professional interests and your statistical background (noting your publication ‘Reading Keynes’ Treatise on Probability’ in the journal International Statistical Review, volume 79, 2011), you should consider elected membership in the International Statistical Institute (ISI).

Hmmm, thanks but no thanks, I am not certain I need to become a member of the ISI to increase my contacts with the international statistical community! (Disclaimer: This post makes fun of the anonymous emailing, not of the ISI!)

Confidence distributions

Posted in Books, Statistics, Travel, University life on June 11, 2012 by xi'an

I was asked by the International Statistical Review editor, Marc Hallin, for a discussion of the paper “Confidence distribution, the frequentist distribution estimator of a parameter — a review” by Min-ge Xie and Kesar Singh, both from Rutgers University. Although the paper is not available on-line, similar and recent reviews and articles can be found in a 2007 IMS Monograph and a 2012 JASA paper, both with Bill Strawderman, as well as a chapter in the recent Festschrift for Bill Strawderman. The notion of confidence distribution is quite similar to that of the fiducial distribution introduced by R.A. Fisher, and both share, in my opinion, the same drawback, namely that they aim at a distribution over the parameter space without specifying (at least explicitly) a prior distribution. Furthermore, the way the confidence distribution is defined perpetuates the ongoing confusion between confidence and credible intervals, in that the cdf on the parameter θ is derived via the inversion of a confidence upper bound (or, equivalently, of a p-value…). Even though this inversion properly defines a cdf on the parameter space, there is no particular validity in the derivation. Either the confidence distribution corresponds to a genuine posterior distribution, in which case I think the only possible interpretation is a Bayesian one; or it does not correspond to a genuine posterior distribution, because no prior can lead to it, in which case there is a probabilistic impossibility in using this distribution. As a result, my discussion (now posted on arXiv) is rather negative about the benefits of this notion of confidence distribution.
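
To make the inversion concrete, here is a minimal R sketch (my own toy example, not taken from the paper) of the confidence distribution for a normal mean μ with known standard deviation σ, obtained by inverting the one-sided p-value, i.e. H_n(μ) = Φ(√n (μ − x̄)/σ); the sample below is arbitrary. In this toy case the confidence cdf coincides with the Bayesian posterior cdf under a flat prior on μ, which is exactly the correspondence at stake in the discussion.

```r
## Minimal sketch: confidence distribution for a normal mean (known sigma),
## obtained by inverting the one-sided p-value; toy data, for illustration only.
set.seed(1)
n     <- 30
sigma <- 1
x     <- rnorm(n, mean = 2, sd = sigma)
xbar  <- mean(x)

## confidence cdf H_n(mu) = Phi(sqrt(n) * (mu - xbar) / sigma)
conf_cdf <- function(mu) pnorm(sqrt(n) * (mu - xbar) / sigma)

## posterior cdf of mu under a flat prior: N(xbar, sigma^2 / n)
post_cdf <- function(mu) pnorm(mu, mean = xbar, sd = sigma / sqrt(n))

conf_cdf(2.1) - post_cdf(2.1)   # essentially zero: the two cdfs coincide here
```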

One entry in the review, albeit peripheral, attracted my attention. The authors mention a tech' report where they exhibit a paradoxical behaviour of a Bayesian procedure: given a (skewed) prior on a pair (p0,p1) and a binomial likelihood, the posterior distribution on p1-p0 has its main mass in the tails of both the prior and the likelihood (“the marginal posterior of d = p1-p0 is more extreme than its prior and data evidence!”). The information provided in the paper about the actual experiment is rather sparse, and looking at two possible priors exhibited nothing of the kind… I went to the authors' webpages and found a more precise explanation on Min-ge Xie's page:

Although the contour plot of the posterior distribution sits between those of the prior distribution and the likelihood function, its projected peak is more extreme than the other two. Further examination suggests that this phenomenon is genuine in binomial clinical trials and it would not go away even if we adopt other (skewed) priors (for example, the independent beta priors used in Joseph et al. (1997)). In fact, as long as the center of a posterior distribution is not on the line joining the two centers of the joint prior and likelihood function (as it is often the case with skewed distributions), there exists a direction along which the marginal posterior fails to fall between the prior and likelihood function of the same parameter.

and a link to another paper. Reading through that paper (and in particular Section 4), it appears that the above “paradoxical” picture is the result of projecting the joint distributions represented in this second picture. By projection, I presume the authors mean integrating out the second component, namely p1+p0. This indeed provides the marginal prior of p1-p0 and the marginal posterior of p1-p0, but…not the marginal likelihood of p1-p0! This entity is not defined, once again because there is no reference measure on the parameter space that could justify integrating out some parameters in the likelihood. (Overall, I do not think the “paradox” is overwhelming: the joint posterior distribution does precisely the merging of prior and data information we would expect, and it is not as if the marginal posterior were located in zones with zero prior probability and zero (profile) likelihood. I am also always wary of arguments based on modes, since those are highly dependent on the parameterisation.)
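
For readers who want to reproduce this kind of comparison, here is a minimal R sketch (my own illustration, not the authors' code): independent Beta priors on (p0,p1), as in Joseph et al. (1997), binomial counts in each arm, and the simulated marginal prior and posterior of d = p1-p0. All hyperparameters and counts below are hypothetical, and, in line with the point above, no “marginal likelihood of d” is plotted since it is not defined without a reference measure.

```r
## Minimal sketch: marginal prior vs marginal posterior of d = p1 - p0 under
## independent Beta priors and binomial data; all numbers are hypothetical.
set.seed(42)
N <- 1e5

a0 <- 2; b0 <- 8    # hypothetical prior on p0
a1 <- 8; b1 <- 2    # hypothetical prior on p1
n0 <- 20; y0 <- 4   # hypothetical control-arm data
n1 <- 20; y1 <- 15  # hypothetical treatment-arm data

## marginal prior of d, by simulation
d_prior <- rbeta(N, a1, b1) - rbeta(N, a0, b0)

## conjugate update: the posteriors are again independent Betas
d_post  <- rbeta(N, a1 + y1, b1 + n1 - y1) - rbeta(N, a0 + y0, b0 + n0 - y0)

## compare the two marginals of d
plot(density(d_prior), xlim = c(-1, 1), lty = 1,
     main = "marginals of d = p1 - p0", xlab = "d")
lines(density(d_post), lty = 2)
legend("topleft", legend = c("prior", "posterior"), lty = 1:2)
```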

Most unfortunately, when searching for more information on the authors' webpages, I came upon the sad news that Professor Singh had passed away three weeks ago, at the age of 56. (Professor Xie wrote a touching eulogy of his friend and co-author.) I had only met briefly with Professor Singh during my visit to Rutgers two months ago, but he sounded like an academic who would have enjoyed the kind of debate drafted in my discussion. To the much more important loss to family, friends and faculty represented by Professor Singh's demise, I thus add the loss of the intellectual challenge of crossing arguments with him. And I look forward to discussing the issues with the first author of the paper, Professor Xie.

A misleading title…

Posted in Books, R, Statistics, University life on September 5, 2011 by xi'an

When I received this book, Handbook of fitting statistical distributions with R, by Z. Karian and E.J. Dudewicz, from/for the Short Book Reviews section of the International Statistical Review, I was obviously impressed by its size (around 1700 pages and 3 kilos…). From briefly glancing at the table of contents, and the list of standard distributions appearing as subsections of the first chapters, I thought that the authors were covering different estimation/fitting techniques for most of the standard distributions. After taking a closer look at the book, I think the cover is misleading in several aspects: this is not a handbook (a.k.a. a reference book), it does not cover standard statistical distributions, the R input is marginal, and the authors only wrote part of the book, since about half of the chapters are written by other authors…


Numerical analysis for statisticians

Posted in Books, R, Statistics, University life on August 26, 2011 by xi'an

“In the end, it really is just a matter of choosing the relevant parts of mathematics and ignoring the rest. Of course, the hard part is deciding what is irrelevant.”

Somehow, I had missed the first edition of this book and thus I started reading it this afternoon with a newcomer's eyes (obviously, I will not comment on the differences with the first edition, sketched by the author in the Preface). Past the initial surprise of discovering it was a mathematics book rather than an algorithmic book, I became engrossed in my reading and could not let it go! Numerical Analysis for Statisticians, by Kenneth Lange, is a wonderful book. It provides most of the necessary background in calculus and some algebra to conduct rigorous numerical analyses of statistical problems. This includes expansions, eigen-analysis, optimisation, integration, approximation theory, and simulation, in less than 600 pages. It may be due to the fact that I was reading the book in my garden, with the background noise of the wind in tree leaves, but I cannot find any solid fact to grumble about! Not even about the MCMC chapters! I simply enjoyed Numerical Analysis for Statisticians from beginning to end.

“Many fine textbooks (…) are hardly substitutes for a theoretical treatment emphasizing mathematical motivations and derivations. However, students do need exposure to real computing and thoughtful numerical exercises. Mastery of theory is enhanced by the nitty gritty of coding.” 

From the above, it may sound as if Numerical Analysis for Statisticians does not fulfill its purpose and is too much of a mathematical book. Be assured this is not the case: the contents are firmly grounded in calculus (analysis) but the (numerical) algorithms are only one code away. An illustration (among many) is found in Section 8.4: Finding a Single Eigenvalue, where Kenneth Lange shows how the Rayleigh quotient algorithm of the previous section can be exploited to this aim, when supplemented with a good initial guess based on Gerschgorin's circle theorem. This is brilliantly executed in two pages and the code is just one keyboard away. The EM algorithm is immersed in a larger M[&]M perspective. Problems are numerous and mostly of a high standard, meaning one (including me) has to sit and think about them. References are kept to a minimum; they are mostly (highly recommended) books, plus a few papers primarily exploited in the problem sections. (When reading the Preface, I found that “John Kimmel, [his] long suffering editor, exhibited extraordinary patience in encouraging [him] to get on with this project”. The quality of Numerical Analysis for Statisticians is also a testimony to John's editorial acumen!)
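
To give a flavour of that section, here is a small R sketch (my own, not Lange's code) of Rayleigh quotient iteration for a single eigenvalue of a symmetric matrix, seeded with a Gerschgorin disc centre as the initial eigenvalue guess; the matrix, index and tolerance below are arbitrary.

```r
## Minimal sketch: Rayleigh quotient iteration for one eigenvalue of a
## symmetric matrix A, starting from the Gerschgorin disc centre A[i, i].
rayleigh_quotient_iteration <- function(A, i = 1, tol = 1e-8, maxit = 100) {
  n  <- nrow(A)
  mu <- A[i, i]                           # Gerschgorin disc centre as initial guess
  x  <- rnorm(n); x <- x / sqrt(sum(x^2))
  for (k in seq_len(maxit)) {
    ## shifted inverse-iteration step; a (near-)singular solve signals convergence
    y <- tryCatch(solve(A - mu * diag(n), x), error = function(e) NULL)
    if (is.null(y)) break
    x  <- y / sqrt(sum(y^2))
    mu_new <- drop(crossprod(x, A %*% x))  # updated Rayleigh quotient
    if (abs(mu_new - mu) < tol) { mu <- mu_new; break }
    mu <- mu_new
  }
  list(value = mu, vector = x)
}

## usage on a random symmetric matrix
set.seed(1)
B <- crossprod(matrix(rnorm(25), 5, 5))
rayleigh_quotient_iteration(B, i = 1)$value
eigen(B)$values   # for comparison
```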

“Every advance in computer architecture and software tempts statisticians to tackle numerically harder problems. To do so intelligently requires a good working knowledge of numerical analysis. This book equips students to craft their own software and to understand the advantages and disadvantages of different numerical methods. Issues of numerical stability, accurate approximation, computational complexity, and mathematical modeling share the limelight in a broad yet rigorous overview of those parts of numerical analysis most relevant to statisticians.”

While I am reacting so enthusiastically to the book (imagine, there is even a full chapter on continued fractions!), it may be that my French math background is biasing my evaluation and that graduate students around the world would find the book too hard. However, I do not think so: the style of Numerical Analysis for Statisticians is very fluid and the rigorous mathematics is mostly at the level of undergraduate calculus. The more advanced topics like wavelets, Fourier transforms and Hilbert spaces are very well introduced and do not require prerequisites in complex calculus or functional analysis. (Although I take no joy in this, even measure theory does not appear to be a prerequisite!) On the other hand, a good background in statistics is a prerequisite. This book will clearly demand a lot of work from its readers, but the respect Kenneth Lange shows them should be motivation enough to keep going until these essential notions are assimilated. Numerical Analysis for Statisticians is also recommended for more senior researchers, and not only as the basis for one or two courses on statistical computing. It contains most of the mathematical foundations we need, even if we do not know we need them! Truly an essential book.

key[ed/nes] in!

Posted in Books, Statistics, University life on November 19, 2010 by xi'an

Great news in the mail today: the revision of my review of Keynes' A Treatise on Probability has been accepted by the International Statistical Review! With a very nice message from the editor:

It is an excellent revision and has addressed all the important points and more. I must also compliment you on your fluid and interesting writing style. It makes for very nice reading.

(In fact, this review of Keynes’ book is my first publication in this journal. This irrelevant point of information reminds me of an equally unimportant but enjoyable discussion Andrew Gelman and I had in the IHP cafeteria last year about the merits of publishing in new journals… )

