Archive for censoring

did I mean endemic? [pardon my French!]

Posted in Books, Statistics, University life on June 26, 2014 by xi'an

[clouds, Nov. 02, 2011]

Deborah Mayo wrote a Saturday night special column on our Big Bayes stories issue in Statistical Science. She (predictably?) focussed on the critical discussions, especially David Hand’s most forceful arguments, where he essentially considers that, by selecting successful stories, we (the special issue editors) biased the debate and provided a “one-sided” story, and that we or the editor of Statistical Science should also have included frequentist stories. To which Deborah points out that demonstrating that “only” a frequentist solution is available may be beyond the possible. Still, I could think of partial information and partial inference problems like the “paradox” raised by Jamie Robins and Larry Wasserman a few years ago. (Not the normalising constant paradox but the one about censoring.) Anyway, the goal of this special issue was to provide a range of realistic illustrations where Bayesian analysis was a most reasonable approach, not to raise the Bayesian flag against other perspectives: in an ideal world it would have been more interesting to get discussants to produce alternative analyses bypassing the Bayesian modelling, but discussants obviously only have a limited amount of time to dedicate to their discussion(s) and the problems were complex enough to deter any attempt in this direction.

As an aside, and in explanation of the cryptic title of this post, Deborah wonders at my use of endemic in the preface and at a possible mis-translation from the French. I did mean endemic (and endémique), in a half-joking reference to a disease one cannot completely get rid of. At least in French the term extends beyond diseases, but presumably pervasive would have been less confusing… Or ubiquitous (as in Ubiquitous Chip, for those with Glaswegian ties!). She also expresses “surprise at the choice of name for the special issue. Incidentally, the “big” refers to the bigness of the problem, not big data. Not sure about “stories”.” Maybe another occurrence of lost in translation… I indeed had no intention of a connection with the “big” of “Big Data”, but wanted to convey the notion of a big, as in major, problem. And of a story explaining why the problem was considered and how the authors reached a satisfactory analysis. The story of the resolution of the Air France Rio-Paris crash is representative of that intent. (Hence the choice of the above picture.)

Typo in MCSM [bis]

Posted in Books, Statistics, University life on July 19, 2010 by xi'an

Doug Rivers from Stanford sent me the following email:

On p. 175 of Monte Carlo Statistical Methods, shouldn’t the last displayed equation just be

L(\theta|y) = \int_{\mathcal{Z}} L^c(\theta|y,z)\,\text{d}z

I don’t see how you get

L(\theta|y) = \mathbb{E}[L^c(\theta|y,Z)].

Doug is completely right: the expectation, as written, is incorrect. The difficulty with Example 5.14 was also pointed out in an earlier post. Alas, the resolution in that first post was just as confusing as the mistake itself! (I have just updated the post to remove the confusion.) There is no expectation involved in this likelihood because the z_i’s are truncated at a: their density is therefore a renormalised version of f(z-\theta)… I now think the whole example should be rewritten, because it starts as if m observations out of n were uncensored, only to move to a fixed censoring bound a. While the two likelihoods are proportional when a=y_m, the confusion is still a bad idea!
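
For concreteness, here is a minimal numerical sketch of the point above, under assumptions that are mine and not the book’s (a standard normal model, made-up values for y, a and θ, and a single censored value): integrating the complete-data likelihood over the censored value recovers the observed likelihood, while taking an expectation under the truncated density does not.

```python
# Minimal numerical check (assumed normal model, one value censored at bound a):
# integrating L^c(theta|y,z) over z > a recovers the observed-data likelihood,
# while the expectation of L^c under the truncated density does not.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

theta, a = 1.0, 2.0                      # hypothetical parameter and censoring bound
y = np.array([0.3, 1.1, 1.8])            # hypothetical uncensored observations (m = 3)
n_cens = 1                               # one observation censored at a

# Observed-data likelihood: prod_i f(y_i - theta) * [1 - F(a - theta)]^(n - m)
L_obs = np.prod(norm.pdf(y - theta)) * (1 - norm.cdf(a - theta)) ** n_cens

# Integral of the complete-data likelihood over the censored value z in (a, infinity)
L_int, _ = quad(lambda z: np.prod(norm.pdf(y - theta)) * norm.pdf(z - theta),
                a, np.inf)

# Expectation of L^c under the truncated density f(z - theta) / [1 - F(a - theta)]
E_Lc, _ = quad(lambda z: np.prod(norm.pdf(y - theta)) * norm.pdf(z - theta)
               * norm.pdf(z - theta) / (1 - norm.cdf(a - theta)),
               a, np.inf)

print(L_obs, L_int, E_Lc)                # L_obs and L_int agree; E_Lc differs
```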