## Two more handbook chapters

**A**s mentioned in my earlier post, I had to write a revised edition of my chapter *Bayesian Computational Methods* in the *Handbook of Computational Statistics* (second edition), edited by J. Gentle, W. Härdle and Y. Mori. In parallel, I was asked for a second chapter, *Bayesian methods and expert elicitation*, in a handbook on risk analysis edited by Klaus Böckner. So, on Friday, I went over the first edition of this chapter of the *Handbook of Computational Statistics* and added the most recent developments I deemed important to mention, like ABC, as well as recent SMC and PMC algorithms, increasing the length by about ten pages. Simultaneously, Jean-Michel Marin completed my draft for the other handbook and I submitted both chapters, then arXived one and the other.

**I**t is somehow interesting (on a lazy blizzardly Sunday afternoon with nothing better to do!) to look at the differences between those chapters, which aim at the same description of important computational techniques for Bayesian statistics (and are based on the same skeleton). The first chapter is broader and, with its 60 pages, functions as a (very) short book on the topic. Given that the first version was written in 2003, the focus is more on latent variables, with mixture models repeatedly used as examples. Reversible jump also features prominently. In my opinion, it reads well and could serve as a primary entry for a short formal course on computational methods. (Even though *Introducing Monte Carlo Methods with R* is presumably more appropriate for a short course.)

**T**he second chapter started from the skeleton of the earlier version of the first chapter, with the probit model as the benchmark example. I worked on a first draft during the last vacation, and Jean-Michel then took over to produce the current version, where reversible jump has been removed and ABC introduced in greater detail. In particular, we used a very special version of ABC for the probit model, resorting to the distance between the expectations of the binary observables, namely

$$\rho(\mathbf{y},\mathbf{z}) = \sum_{i=1}^n \left\{ \Phi\!\left(x_i^\mathrm{T}\hat\beta(\mathbf{y})\right) - \Phi\!\left(x_i^\mathrm{T}\hat\beta(\mathbf{z})\right) \right\}^2$$

where $\hat\beta(\mathbf{y})$ is the MLE of $\beta$ based on the observations $\mathbf{y}$, instead of the difference between the simulated and the observed binary observables

$$\rho(\mathbf{y},\mathbf{z}) = \sum_{i=1}^n (y_i - z_i)^2,$$

which incorporates a useless randomness. With this choice, and when using a tolerance $\epsilon$ set at the .01 quantile of the simulated distances, the difference with the true posterior on $\beta$ is very small, as shown by the figure (obtained for the Pima Indian dataset in R). Obviously, this stabilising trick only works in specific situations where a predictive of sorts can be computed.
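For readers curious about the mechanics, here is a minimal sketch of such an ABC rejection sampler for probit coefficients, using as summary statistics the expected binary observables $\Phi(x_i^\mathrm{T}\hat\beta)$ rather than the raw simulated 0/1 outcomes. It is not the chapter's actual code: the Gaussian prior, the BFGS-based MLE, and all function names are my own assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def probit_mle(X, y):
    """MLE of beta in the probit model P(y_i = 1) = Phi(x_i' beta)."""
    def nll(beta):
        p = np.clip(norm.cdf(X @ beta), 1e-10, 1 - 1e-10)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

def abc_probit(X, y, n_sim=2000, quantile=0.01, prior_scale=5.0, rng=None):
    """ABC rejection for probit coefficients, comparing the expectations
    Phi(x_i' beta_hat) of the binary observables instead of the simulated
    0/1 outcomes themselves (which would add a useless randomness)."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    s_obs = norm.cdf(X @ probit_mle(X, y))          # summary of observed data
    betas = rng.normal(0.0, prior_scale, size=(n_sim, p))  # prior draws (assumed N(0, s^2))
    dists = np.empty(n_sim)
    for k, beta in enumerate(betas):
        z = (rng.random(n) < norm.cdf(X @ beta)).astype(float)  # simulate pseudo-data
        if z.min() == z.max():                      # MLE undefined for all-0/all-1 samples
            dists[k] = np.inf
            continue
        s_sim = norm.cdf(X @ probit_mle(X, z))      # summary of simulated data
        dists[k] = np.sum((s_obs - s_sim) ** 2)     # distance between expectations
    # tolerance epsilon taken as an empirical quantile of the distances
    eps = np.quantile(dists[np.isfinite(dists)], quantile)
    return betas[dists <= eps]
```

The degenerate-sample guard matters in practice: draws from a vague prior frequently produce all-zero or all-one pseudo-data, for which the probit MLE does not exist.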
