Archive for typos

ARS: when to update?

Posted in Books, Kids, Statistics, University life on May 25, 2017 by xi'an

An email I got today from Heng Zhou wondered about the validity of the version of the ARS [adaptive rejection sampling] algorithm printed in our book Monte Carlo Statistical Methods. The worry is that, in the original version of the algorithm, the envelope of the log-concave target f(·) is only updated for rejected values. My reply to the question is that both versions return a value simulated from f, since changing the envelope between simulations does not modify the accept-reject nature of the algorithm. Nor is there any issue of dependence between the simulations of this adaptive accept-reject method: all simulations remain independent. The question is rather one of efficiency, namely whether it pays to update the envelope(s) when accepting a new value, and I think it does, because the costly part is the computation of f(x) rather than the call to the piecewise-exponential envelope. Correct me if I am wrong!
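
To make the point concrete, here is a minimal R sketch of ARS for a standard Normal target, not the book's implementation: the set of support points, and hence the envelope, is enlarged after every evaluation of the log-target h(x) = log f(x), whether the proposal ends up accepted or rejected, since the expensive evaluation has been paid for anyway.

h  <- function(x) -x^2 / 2           # log of the N(0,1) target, up to a constant
dh <- function(x) -x                 # its derivative

ars <- function(n, x0 = c(-2, 2)) {  # x0 must bracket the mode
  xs <- sort(x0)                     # current support points
  out <- numeric(n); i <- 0
  while (i < n) {
    k <- length(xs); hx <- h(xs); dhx <- dh(xs)
    # intersections of consecutive tangents to h at the support points
    z  <- (diff(hx) - diff(xs * dhx)) / -diff(dhx)
    zz <- c(-Inf, z, Inf)
    # mass of exp(tangent j) over (zz[j], zz[j+1]); a careful implementation
    # would also guard against slopes dhx[j] numerically close to zero
    mass <- exp(hx - dhx * xs) * (exp(dhx * zz[-1]) - exp(dhx * zz[-(k + 1)])) / dhx
    # simulate from the piecewise-exponential envelope by inverting its cdf
    j <- sample.int(k, 1, prob = mass)
    u <- runif(1)
    x <- log(u * exp(dhx[j] * zz[j + 1]) + (1 - u) * exp(dhx[j] * zz[j])) / dhx[j]
    ux <- hx[j] + dhx[j] * (x - xs[j])           # envelope value at x
    if (runif(1) <= exp(h(x) - ux)) { i <- i + 1; out[i] <- x }
    xs <- sort(c(xs, x))   # update the envelope, accepted or not
  }
  out
}

hist(ars(1e3), freq = FALSE, breaks = 40); curve(dnorm(x), add = TRUE)

The only difference with the only-on-rejection version is the last line of the loop; validity is unaffected since each proposal remains an exact accept-reject step given the current envelope.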

a typo that went under the radar

Posted in Books, R, Statistics, University life on January 25, 2017 by xi'an

A chance occurrence on X validated: a question about an incomprehensible formula for Bayesian model choice which, most unfortunately, appeared in Bayesian Essentials with R! Eeech! It looks like one line in our LaTeX file got erased and the likelihood part in the denominator vanished altogether. Apologies to all readers confused by this nonsensical formula!
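
For the record, here is a generic version of the intended formula, in notation that may well differ from the book's: the posterior probability of a model must retain the integrated likelihoods of all candidate models in its denominator,

P(M_i \mid x) = \frac{p_i \int_{\Theta_i} f_i(x \mid \theta_i)\, \pi_i(\theta_i)\, \mathrm{d}\theta_i}{\sum_j p_j \int_{\Theta_j} f_j(x \mid \theta_j)\, \pi_j(\theta_j)\, \mathrm{d}\theta_j}

where the p_j's are the prior model probabilities.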

Example 7.3: what a mess!

Posted in Books, Kids, R, Statistics, University life on November 13, 2016 by xi'an

A rather obscure question on Metropolis-Hastings algorithms on X validated ended up being about our first illustration in Introducing Monte Carlo Methods with R, and exposed some inconsistencies in the following example… Example 7.2 is based on a [toy] joint Beta × Binomial target, which leads to a basic Gibbs sampler. We thought this was straightforward, but it may confuse readers who think of Gibbs sampling as a tool for posterior simulation since, in this case, there is neither observation nor posterior, but simply a (joint) target in (x,θ).
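
For readers without the book at hand, here is a minimal R sketch in the spirit of Example 7.2, though not the book's exact code and with illustrative constants: the joint target of (X,θ) is defined by X|θ ~ Bin(n,θ) and θ ~ Be(a,b), so the two full conditionals are X|θ ~ Bin(n,θ) and θ|x ~ Be(x+a, n−x+b).

niter <- 1e4
n <- 15; a <- 3; b <- 7                          # illustrative constants
X <- numeric(niter); theta <- numeric(niter)
theta[1] <- rbeta(1, a, b)                       # start from the Beta marginal
X[1] <- rbinom(1, n, theta[1])
for (t in 2:niter) {
  X[t]     <- rbinom(1, n, theta[t - 1])         # X | theta ~ Bin(n, theta)
  theta[t] <- rbeta(1, X[t] + a, n - X[t] + b)   # theta | X ~ Be(X+a, n-X+b)
}
barplot(table(X) / niter)                        # marginal pmf of X (Beta-binomial)

Note that no data enters the picture: the chain targets the joint distribution of (X,θ) itself, which is why thinking in terms of a posterior is misleading here.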

And then it indeed came out that we had incorrectly written Example 7.3 on the [toy] Normal posterior, using at times a Normal mean prior with a [prior] variance scaled by the sampling variance and at times a Normal mean prior with a [prior] variance unscaled by the sampling variance. I am rather amazed that this did not show up earlier, although there were already typos listed about that example.
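
To spell the inconsistency out in generic notation (not necessarily the book's), take a single observation x ~ N(θ,σ²). The two priors lead to distinct posteriors:

\theta \sim \mathcal{N}(\mu, \tau^2) \quad\Longrightarrow\quad \theta \mid x \sim \mathcal{N}\!\left(\frac{\sigma^2\mu + \tau^2 x}{\sigma^2 + \tau^2},\ \frac{\sigma^2\tau^2}{\sigma^2 + \tau^2}\right)

\theta \sim \mathcal{N}(\mu, \sigma^2\tau^2) \quad\Longrightarrow\quad \theta \mid x \sim \mathcal{N}\!\left(\frac{\mu + \tau^2 x}{1 + \tau^2},\ \frac{\sigma^2\tau^2}{1 + \tau^2}\right)

so alternating between the two parameterisations within a single example produces incompatible formulas.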

from down-under, Lake Menteith upside-down

Posted in Books, R, Statistics on January 23, 2013 by xi'an

Lake Menteith Landsat image, as printed in Bayesian Core (2007)

The dataset used in Bayesian Core for the chapter on image processing is a Landsat picture of the Lake of Menteith in Scotland (close to Loch Lomond). (Yes, Lake of Menteith, not Loch Menteith!) Here is the image produced in the book. I just got an email from Matt Moores at QUT that the image is both rotated and flipped:

The image of Lake Menteith in Figure 8.6 of Bayesian Core is upside-down and back-to-front, so to speak. Also, I recently read a paper by Lionel Cucala & J-M Marin that has the same error.

This is due to the difference between matrix indices and image coordinates: matrices in R are indexed by [row,column] but image coordinates are [x,y]. Also, y=1 is the first row of the matrix, but the bottom row of pixels in an image.

Only a one line change to the R code is required to display the image in the correct orientation:

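# gloss (not part of the original post): [100:1,] reverses the row order to
# undo the vertical flip, while t() exchanges rows and columns because image()
# treats the first index of its matrix argument as the x coordinate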
image(1:100,1:100,t(as.matrix(lm3)[100:1,]),col=gray(256:1/256),xlab="",ylab="")

As can be checked on Google Maps, the picture is indeed rotated by a -90° angle and the transpose correction does the job!

yet more questions about Monte Carlo Statistical Methods

Posted in Books, Statistics, University life on December 8, 2011 by xi'an

As a coincidence, here is the third email I got this week about typos in Monte Carlo Statistical Methods, from Peng Yu this time. (Which suits me well in terms of posts, as I am currently travelling to Provo, Utah!)

I’m reading the section on importance sampling. But there are a few cases in your book MCSM2 that are not clear to me.

On page 96: “Theorem 3.12 suggests looking for distributions g for which |h|f/g is almost constant with finite variance.”

What is the precise meaning of “almost constant”? If |h|f/g is almost constant, how come its variance is not finite?

“Almost constant” is not a well-defined property, I am afraid. By this sentence on page 96 we meant using densities g that make |h|f/g vary as little as possible while remaining manageable. Hence the insistence on the finite variance. Of course, the closer |h|f/g is to a constant function, the more likely the variance is to be finite.

“It is important to note that although the finite variance constraint is not necessary for the convergence of (3.8) and of (3.11), importance sampling performs quite poorly when (3.12) ….”

It is not obvious to me why importance sampling performs poorly when (3.12) holds. I might have overlooked some very simple facts. Would you please remind me why this is the case? From the previous discussion in the same section, it seems that h(x) is missing in (3.12). I think that (3.12) should be (please compare with the first equation in Section 3.3.2)

\int h^2(x)\, f^2(x) / g(x)\, \mathrm{d}x = +\infty

The reason for preferring a finite variance of f/g, that is, for ruling out (3.12), is that we would like the importance function g to work well for most integrable functions h. Hence the requirement that the importance weight f/g itself behaves well. It guarantees some robustness across the h's and, by virtue of the Cauchy-Schwarz inequality, avoids checking for a finite variance (as in your displayed equation) for every function h that is square-integrable against g.
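
To see in practice why (3.12) is a nuisance, here is a small R illustration, not taken from the book: importance sampling of E_f[X²] = 1 for f = N(0,1), with proposals g = N(0,σ²). The integral ∫ f²/g is finite if and only if σ² > 1/2, so σ = 1.5 gives a finite-variance weight while σ = 0.5 does not.

set.seed(1)
n <- 1e5
is_run <- function(sigma) {
  x <- rnorm(n, sd = sigma)
  w <- dnorm(x) / dnorm(x, sd = sigma)   # importance weights f/g
  cumsum(w * x^2) / seq_len(n)           # running estimates of E_f[X^2] = 1
}
plot(is_run(1.5), type = "l", ylim = c(0, 2),
     xlab = "iterations", ylab = "running estimate")
lines(is_run(0.5), col = "red")          # infinite-variance case
abline(h = 1, lty = 2)

The σ = 0.5 path typically shows long stretches of underestimation broken by sudden jumps, driven by rare draws with enormous weights, which is the poor behaviour alluded to in the book.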

confusing errata in Monte Carlo Statistical Methods

Posted in Books, Statistics, University life on December 7, 2011 by xi'an

Following the earlier errata on Monte Carlo Statistical Methods, I received this email from Nick Foti:

I have a quick question about Example 8.2 in Monte Carlo Statistical Methods, which derives a slice sampler for a truncated N(-3,1) distribution (note: the book says it is a N(3,1) distribution, but the code and derivation are for a N(-3,1)). My question is about the slice set A^{(t+1)}, which is described as

A^{(t+1)} = \{\, y : f_1(y) \geq u\, f_1(x^{(t)}) \,\}

which makes sense if u ~ U(0,1), as it corresponds to the previously described algorithm. However, the errata for the book say that u should actually be u^{(t+1)}, which according to the algorithm should be distributed as U(0, f_1(x)). So unless something clever is being done with ratios of the f_1's, it seems like the u^{(t+1)} should be U(0,1) in this case, right?

There is indeed a typo in Example 8.4: the mean 3 should be -3… As for the erratum, it addresses the missing time indices. Nick is right in assuming that those uniforms are indeed on (0,1), rather than on (0, f_1(x)) as in Algorithm A.31. Apologies to all those confused by the errata!
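
For concreteness, here is a minimal R sketch of such a slice sampler, assuming the target is the N(-3,1) density truncated to [0,∞), a guess on my part as the exact truncation interval may differ in the book. Since f_1 is decreasing on [0,∞), each slice is an interval with an explicit upper bound:

f1 <- function(x) exp(-(x + 3)^2 / 2)    # unnormalised target on [0, Inf)
niter <- 1e4
x <- numeric(niter); x[1] <- 0.1
for (t in 2:niter) {
  omega <- runif(1, 0, f1(x[t - 1]))     # vertical step: u^{(t+1)} ~ U(0, f1(x))
  x[t] <- runif(1, 0, -3 + sqrt(-2 * log(omega)))  # uniform draw on the slice
}
hist(x, freq = FALSE, breaks = 50)
curve(dnorm(x + 3) / pnorm(-3), add = TRUE)  # truncated N(-3,1) density

The slice {y ≥ 0 : f_1(y) ≥ ω} equals [0, −3 + √(−2 log ω)] because ω < f_1(x^{(t)}) ≤ f_1(0) = exp(−4.5) forces the upper endpoint to be nonnegative.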

new typos in Monte Carlo Statistical Methods

Posted in Books, Statistics, University life on December 7, 2011 by xi'an

Thanks to Jay Bartroff for pointing out those typos after teaching from Monte Carlo Statistical Methods:

  • On page 52, the gamma Ga(α, β) distribution uses β as a rate parameter while in other places it is a scale parameter, see, e.g., eqn (2.2) [correct, I must say the parameterisation of the gamma distribution is a pain and, while we tried to homogenise the notation with the second parameter being the rate, there are places like this where either the rate convention (as in the exponential distribution) or the scale convention (as in the generation) is the natural one; see the short R check after this list…]
  • Still on page 52, in Example 2.20, truncated normals are said to be discussed after Example 1.5, but they’re not. [There is a mention made of constrained parameters right after but this is rather cryptic!]
  • On page 53, the ratio f/g_α following the second displayed eqn is missing some terms [or, rather, the equality sign should be a proportional sign]
  • Still on page 53, in eqn (2.11), the whole expression, rather than the square root, should be divided by 2 [yes, surprising typo given that it was derived correctly in the original paper!]
  • On page 92, the exact constraint is that supp(g) actually need only contain the intersection of supp(f) and supp(h), such as when approximating tail probabilities [correct if the importance sampling method is only used for a single function h, else the condition stands as is]
  • On page 94, f_Y does not need that integral in the denominator [correct, we already corrected for the truncation by subtracting 4.5 in the exponential]
  • On page 114, Problem 3.22, ω_i is missing a factor of 1/n [correct]
  • On page 218, in Example 6.24, P_{00} = 0 [indeed, our remark that P_{xx} > 0 should start with x = 1. Note that this does not change the aperiodicity, though]
  • On page 282, the log α after the 2nd displayed equation should be e^α [correct, this was pointed out in a previous list of typos, but clearly not corrected in the latest printing!]
  • On page 282, in the 5th displayed equation, there are missing factors π(α′|b)/π(α₀|b) in the rejection probability [actually, no: those terms, being both proposals and priors, cancel in the ratio. We could add a sentence to this effect to explain why, though.]
  • On page 634, the reference page for exponential distribution is mistakenly given as 99 [wow, very thorough reading! There is an exponential distribution involved on page 99 but I agree this is not the relevant page…]
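
As promised in the first item above, a short R check of the two gamma conventions (R's rgamma accepts either a rate or a scale argument):

set.seed(1)
mean(rgamma(1e5, shape = 3, rate = 2))    # about 3/2 under the rate convention
mean(rgamma(1e5, shape = 3, scale = 2))   # about 3*2 = 6 under the scale convention

The same ambiguity affects dgamma, pgamma, and qgamma, so it is worth always naming the second argument explicitly.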