Archive for typos

a glaring mistake

Posted in Statistics on November 28, 2018 by xi'an

Someone posted this question about Bayes factors in my book on Saturday morning and I could not believe the glaring typo pointed out there had gone through the centuries without anyone noticing! There should be no index 0 or 1 on the θ’s in either integral (or indices all over). I presume I made this typo when cutting & pasting from the previous formula (which addressed the case of two point null hypotheses), but I am quite chagrined that I sabotaged the definition of the Bayes factor for generations of readers of the Bayesian Choice. Apologies!!!
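For the record, the definition should read with a single dummy variable θ in each integral (a reconstruction of the standard form rather than a verbatim quote from the book):

$$B_{01}(x) = \frac{\int_{\Theta_0} f(x\mid\theta)\,\pi_0(\theta)\,\mathrm{d}\theta}{\int_{\Theta_1} f(x\mid\theta)\,\pi_1(\theta)\,\mathrm{d}\theta}$$

where the priors π₀ and π₁ carry the indices, while θ is a mere integration variable over Θ₀ and Θ₁, respectively.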

I thought I did make a mistake but I was wrong…

Posted in Books, Kids, Statistics on November 14, 2018 by xi'an

One of my students in my MCMC course at ENSAE seems to specialise in spotting typos in the Monte Carlo Statistical Methods book, as he has found an issue in every problem he solved! He even went back to a 1991 paper of mine on Inverse Normal distributions, inspired by a discussion with an astronomer, Caroline Soubiran, and my two colleagues, Gilles Celeux and Jean Diebolt. The above derivation from the massive Gradshteyn and Ryzhik (which I discovered thanks to Mary Ellen Bock when arriving in Purdue) is indeed incorrect, as the final term should be the square root of 2β rather than of 8β. However, this typo does not impact the normalising constant of the density, K(α,μ,τ), unless I am further confused.

approximative Laplace

Posted in Books, R, Statistics on August 18, 2018 by xi'an

I came across this question on X validated, wondering about one of our examples in Monte Carlo Statistical Methods. We included a section on Laplace approximations in the Monte Carlo integration chapter, with a bit of reluctance on my side as this type of integral approximation does not directly connect with Monte Carlo methods. Even less so in the case of this example, as we aimed at replacing a coverage probability for a Gamma distribution with a formal Laplace approximation. Formal because of the lack of asymptotics, besides the length of the interval (a,b) whose probability is approximated. Hence, on top of the typos, the point of the example is not crystal clear, in that it does not show much more than that the step-function approximation to the function converges as the interval length goes to zero. For instance, using a flat approximation instead produces an almost as good approximation:

> xact(5,2,7,9)
[1] 0.1933414
> laplace(5,2,7,9)
[1] 0.1933507
> flat(5,2,7,9)
[1] 0.1953668

What may be more surprising is the resilience of the approximation as the width of the interval increases:

> xact(5,2,5,11)
[1] 0.53366
> lapl(5,2,5,11)
[1] 0.5354954
> plain(5,2,5,11)
[1] 0.5861004
> quad(5,2,5,11)
[1] 0.434131
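For readers who want to reproduce these numbers, here is a minimal sketch of what the exact and flat versions may look like, assuming the Gamma(α, scale=β) parameterisation and a midpoint rule for the flat version (lapl and plain presumably being laplace and flat under other names); this is a reconstruction, not the book's code:

# exact Gamma(al, scale=be) probability of the interval (a,b)
xact <- function(al, be, a, b)
  pgamma(b, shape = al, scale = be) - pgamma(a, shape = al, scale = be)
# flat (midpoint) approximation: interval length times the density at the midpoint
flat <- function(al, be, a, b)
  (b - a) * dgamma((a + b) / 2, shape = al, scale = be)

Under this parameterisation, both functions return the values printed above, e.g. xact(5,2,7,9) gives 0.1933414 and flat(5,2,7,9) gives 0.1953668.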

ARS: when to update?

Posted in Books, Kids, Statistics, University life on May 25, 2017 by xi'an

An email I got today from Heng Zhou wondered about the validity of the above form of the ARS algorithm, as printed in our book Monte Carlo Statistical Methods. The worry is that, in the original version of the algorithm, the envelope of the log-concave target f(.) is only updated for rejected values. My reply to the question is that there is no difference between the two versions in terms of returning a value simulated from f, since changing the envelope between simulations does not modify the accept-reject nature of the algorithm. There is no issue of dependence between the simulations of this adaptive accept-reject method: all simulations remain independent. The question is rather one of efficiency, namely whether it pays to update the envelope(s) when accepting a new value, and I think it does, because the costly part is the computation of f(x) rather than the call to the piecewise-exponential envelope. Correct me if I am wrong!
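To make the point concrete, here is a schematic sketch of a single ARS step (not the book's pseudo-code), where sample_envelope, envelope_logdens, and update_envelope are hypothetical helpers standing in for the piecewise-exponential hull machinery:

# schematic ARS step; sample_envelope(), envelope_logdens() and
# update_envelope() are hypothetical stand-ins for the hull machinery
ars_draw <- function(logf, env, update_on_accept = TRUE) {
  repeat {
    x <- sample_envelope(env)           # simulate from the piecewise-exponential hull
    lfx <- logf(x)                      # the costly evaluation, needed in both cases
    if (log(runif(1)) <= lfx - envelope_logdens(env, x)) {
      if (update_on_accept)             # refining here recycles the already-computed logf(x)
        env <- update_envelope(env, x, lfx)
      return(list(x = x, env = env))    # accepted: an exact draw from f
    }
    env <- update_envelope(env, x, lfx) # classical version: refine on rejections only
  }
}

Either way, the accepted x is an exact draw from f; updating on acceptance merely tightens the hull for subsequent draws, at no extra evaluation of f.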

a typo that went under the radar

Posted in Books, R, Statistics, University life on January 25, 2017 by xi'an

A chance occurrence on X validated: a question about an incomprehensible formula for Bayesian model choice which, most unfortunately!, appeared in Bayesian Essentials with R! Eeech! It looks like one line in our LaTeX file got erased and the likelihood part in the denominator vanished altogether. Apologies to all readers confused by this nonsensical formula!
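For reference, the standard shape such a formula should take (a generic reconstruction, not the exact display in the book) is

$$\pi(\mathfrak{M}_k\mid x) = \frac{p_k \int_{\Theta_k} f_k(x\mid\theta_k)\,\pi_k(\theta_k)\,\mathrm{d}\theta_k}{\sum_j p_j \int_{\Theta_j} f_j(x\mid\theta_j)\,\pi_j(\theta_j)\,\mathrm{d}\theta_j}$$

with the integrated likelihoods very much present in the denominator.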

Example 7.3: what a mess!

Posted in Books, Kids, R, Statistics, University life on November 13, 2016 by xi'an

A rather obscure question on Metropolis-Hastings algorithms on X Validated ended up being about our first illustration in Introducing Monte Carlo Methods with R, and exposed some inconsistencies in the following example… Example 7.2 is based on a [toy] joint Beta × Binomial target, which leads to a basic Gibbs sampler. We thought this was straightforward, but it may confuse readers who think of using Gibbs sampling for posterior simulation since, in this case, there is neither observation nor posterior, but simply a (joint) target in (x,θ).
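As a reminder of what this sampler looks like, here is a minimal sketch of the two-stage Gibbs sampler for the joint Beta × Binomial target, based on the standard conditionals X|θ ∼ Bin(n,θ) and θ|x ∼ Be(x+a, n−x+b); the values of n, a, b, and the chain length below are illustrative, not the book's:

# toy Beta x Binomial Gibbs sampler; n, a, b and nsim are illustrative values
nsim <- 1e4; n <- 15; a <- 3; b <- 7
X <- TH <- numeric(nsim)
TH[1] <- rbeta(1, a, b)                     # arbitrary starting value for theta
X[1] <- rbinom(1, n, TH[1])
for (t in 2:nsim) {
  X[t]  <- rbinom(1, n, TH[t - 1])          # X | theta ~ Bin(n, theta)
  TH[t] <- rbeta(1, X[t] + a, n - X[t] + b) # theta | x ~ Be(x + a, n - x + b)
}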

And then it indeed came out that we had incorrectly written Example 7.3 on the [toy] Normal posterior, using at times a Normal mean prior with a [prior] variance scaled by the sampling variance, and at times a Normal mean prior with a [prior] variance unscaled by the sampling variance. I am rather amazed that this did not show up earlier, although there were already typos listed about that example.
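To spell out the difference under the usual conjugate-Normal setup (a generic reconstruction, not the book's exact notation): for a single observation x ∼ N(θ,σ²), the unscaled prior θ ∼ N(μ,τ²) leads to

$$\theta\mid x \sim \mathcal{N}\left(\frac{\sigma^2\mu+\tau^2 x}{\sigma^2+\tau^2},\,\frac{\sigma^2\tau^2}{\sigma^2+\tau^2}\right)$$

while the scaled prior θ ∼ N(μ,τ²σ²) leads to

$$\theta\mid x \sim \mathcal{N}\left(\frac{\mu+\tau^2 x}{1+\tau^2},\,\frac{\tau^2\sigma^2}{1+\tau^2}\right)$$

so that mixing the two conventions within the same example produces inconsistent posterior updates.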

from down-under, Lake Menteith upside-down

Posted in Books, R, Statistics on January 23, 2013 by xi'an

[Lake Menteith Landsat image, as printed in Bayesian Core (2007)]

The dataset used in Bayesian Core for the chapter on image processing is a Landsat picture of the Lake of Menteith in Scotland (close to Loch Lomond). (Yes, Lake of Menteith, not Loch Menteith!) Here is the image produced in the book. I just got an email from Matt Moores at QUT pointing out that the image is both rotated and flipped:

The image of Lake Menteith in figure 8.6 of Bayesian Core is upside-down and back-to-front, so to speak. Also, I recently read a paper by Lionel Cucala & J-M Marin that has the same error.

This is due to the difference between matrix indices and image coordinates: matrices in R are indexed by [row,column] but image coordinates are [x,y]. Also, y=1 is the first row of the matrix, but the bottom row of pixels in an image.

Only a one line change to the R code is required to display the image in the correct orientation:

image(1:100, 1:100, t(as.matrix(lm3)[100:1, ]), col = gray((256:1)/256), xlab = "", ylab = "")

As can be checked on Google Maps, the picture is indeed rotated by a −90° angle and the transpose correction does the job!