In my book review of the recent book by Dirk Kroese and Joshua Chan, Statistical Modeling and Computation, I mistakenly and persistently typed the name of the second author as Joshua Chen. This typo alas made it to the printed and on-line versions of the subsequent CHANCE 27(2) column. I am thus very sorry for this mistake and most sincerely apologise to the authors. Indeed, it always annoys me to find my own name mistyped (usually as Roberts!) in references. [If nothing else, this typo signals it is high time for a change of my prescription glasses.]
There might be two typos in the final second moment formula and its derivation (assuming no silly symmetric mistakes in my validation code): the first ν ought to be -ν, and there should be a corresponding scaling factor for the boundary μ in P_{μ,ν-2} as well, since it arises from a change of variable. By the way, in the text referring to Fig. 2, |X| wasn't updated to X+. I hope that this is of some use.
And I checked that indeed I had forgotten the scale factor ν/(ν-2) in the t distribution with ν-2 degrees of freedom, as well as the sign… So I modified the note and re-arXived it. Sorry about this lack of attention to the derivation!
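As a quick sanity check (a snippet of my own, not part of the note): the second moment of a Student's t with ν degrees of freedom is ν/(ν-2) for ν>2, which a short Monte Carlo experiment in R confirms:

nu <- 5
x <- rt(1e6, df = nu)   # large t sample with nu degrees of freedom
mean(x^2)               # Monte Carlo estimate of the second moment
nu/(nu - 2)             # exact value, here 5/3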
Yesterday while in Philadelphia airport I got this email from Javier Rubio:
I am a first-year PhD student at the University of Warwick. I was reading your paper “Properties of nested sampling”, which I found very interesting. I have a question about it: is the second equation on p. 747 correct?
I think this equation is related to the first equation on p. 12 of your paper “Importance sampling methods for Bayesian discrimination between embedded models”.
Indeed, there is a typo in that this formula (page 747) should be
(I checked my LaTeX code and there is a trace of a former \dfrac that got erased but was not replaced with a \big/ symbol… I am quite sorry for the typo, all the more because this paper went through many revisions.) There is no typo in the corresponding chapter of Frontiers of Statistical Decision Making and Bayesian Analysis: In Honor of James O. Berger.
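For the record, the difference between the two LaTeX commands looks as follows (with a placeholder ratio of mine, not the paper's actual formula):

\dfrac{\varphi(\theta)}{\pi(\theta)}    % stacked, display-style fraction
\varphi(\theta) \big/ \pi(\theta)       % same ratio set on a single line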
Edward Kao just sent another typo found both in Monte Carlo Statistical Methods (Problem 3.21) and in Introducing Monte Carlo Methods with R (Exercise 3.17). I also got another email from Jerry Sin mentioning that the matrix summation in the matrix commands of Figure 1.2 of Introducing Monte Carlo Methods with R should be a matrix multiplication, and asking for an errata sheet on the webpage of the books, which is clearly necessary and overdue!
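To illustrate Jerry Sin's point (with toy matrices of my own, not those of Figure 1.2): in R, + acts entry-wise on matrices, while a proper matrix product requires the %*% operator:

A <- matrix(1:4, 2, 2)   # a 2x2 matrix
B <- diag(2)             # the 2x2 identity matrix
A + B                    # entry-wise summation
A %*% B                  # matrix multiplication, here returning A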
This morning I received the following email:
(…) I have a question regarding an algorithm in one of your papers, “Bayesian Modelling and Inference on Mixtures of Distributions”. On page 33, in the Metropolis-Hastings algorithm for the mixture you accept the proposal if r < u. As I understand the MH algorithm you accept the proposal with probability r (technically min(r,1)), so I would expect that you accept if u < r. I cannot see or find a reason elsewhere why r < u works. If you could clarify why r < u works for the MH algorithm I would really appreciate it. (…)
which rightly points out an embarrassing typo in our mixture survey, published in the Handbook of Statistics, volume 25. Indeed, the inequality should be the reverse, u < r, as in the other algorithmic boxes of the survey.
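To make the corrected rule concrete, here is a bare-bones random-walk Metropolis-Hastings sketch in R (with a toy normal target of my own, not the survey's mixture sampler), where the proposal is accepted when the uniform draw u falls below the ratio r, i.e. with probability min(r,1):

target <- function(x) dnorm(x)    # toy target density
x <- 0
for (t in 1:10^3) {
  prop <- x + rnorm(1, sd = .5)   # symmetric random-walk proposal
  r <- target(prop)/target(x)     # Metropolis-Hastings ratio
  if (runif(1) < r) x <- prop     # accept when u < r
}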
I have two questions that have puzzled me for a while. I hope you can shed some light. They are both about Example 3.6 of your book.
1. On page 74, there is a term x(1-x) for m(x). This is fine. But the term disappeared from (3.5) on p.75. My impression is that this is not a typo. There must be a reason for its disappearance. Can you elaborate?
I am alas afraid this is a plain typo, in that I did not carry the x(1-x) term over from one page to the next.
2. On page 75, you have the term “den=dt(normx,3)”. My impression is that you are using a univariate t with 3 degrees of freedom for the approximation. I thought formally you would need a bivariate t with 3 degrees of freedom to do the importance sampling. Why would normx=sqrt(x[,1]^2+x[,2]^2) along with a univariate t work?
This is a shortcut that would require more explanation. While the two-dimensional t sample is y, a linear transform of the isotropic x, it is possible to express the density of y via the one-dimensional t density, hence the apparent confusion between univariate and bivariate t densities…
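To spell out the change-of-variable argument behind the shortcut (in a sketch of my own, with an arbitrary transform L and the mvtnorm package as a cross-check, none of which comes from the book): if x is an isotropic bivariate t sample and y = Lx, then the density of y at a given point only involves the norm of L⁻¹y and the determinant of L:

library(mvtnorm)                   # dmvt() used as a reference value
nu <- 3
L <- matrix(c(2, .5, 0, 1), 2, 2)  # an arbitrary linear transform
y <- c(1.2, -.7)                   # evaluation point
x <- solve(L, y)                   # pull y back to the isotropic scale
# isotropic bivariate t density, a function of the norm alone
fiso <- function(r)
  gamma((nu + 2)/2)/(gamma(nu/2)*nu*pi)*(1 + r^2/nu)^(-(nu + 2)/2)
fiso(sqrt(sum(x^2)))/abs(det(L))   # density of y via the norm of L^{-1}y
dmvt(y, sigma = L %*% t(L), df = nu, log = FALSE)  # matching reference value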
Here are also a few more typos found by Pierre Jacob and Robin Ryder while working on the translation of Introducing Monte Carlo Methods with R:
- p. 137: second equation from the bottom
[right, another victim of cut-and-paste]
- p. 138, Example 5.7: the denominator in the gradient should be 2*beta [yes, the error actually occurs twice, and once again in the R code]
- p. 138, first paragraph: not a typo but a lack of detail: are the stated conditions necessary and sufficient? [indeed, they are sufficient]
- demo(Chapter.5) triggers an error message [true, the shortcut max=TRUE instead of maximum=TRUE in optimise does not work with R version 2.11.1; see the sketch below]
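Since maximum comes after the … argument in optimise's signature, it cannot be partially matched: max=TRUE is silently passed on to the function being optimised, hence the error. A call that does work (with a toy function of my own):

h <- function(x) exp(-(x - 1)^2)   # toy unimodal function
optimise(h, interval = c(0, 2), maximum = TRUE)
# returns $maximum close to 1 and $objective close to 1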