Bayesian computational tools
I just arXived a survey entitled Bayesian computational tools, in connection with a chapter the editors of the Annual Review of Statistics and Its Application asked me to write. (A puzzling title: I would have used Applications rather than Application. A puzzling journal too: although it is endowed with a prestigious editorial board, I wonder about the long-term prospects of the review once “all” topics have been addressed. At least the “non-profit” aspect is respected: $100 for personal subscriptions and $250 for libraries, plus one year of complimentary online access to volume 1.) Nothing terribly novel in my review, which illustrates a few computational tools in a few Bayesian settings; I was however missing five or six pages to cover particle filters and sequential Monte Carlo.

I nonetheless had fun with a double-exponential (or Laplace) example. This distribution indeed allows for a closed-form posterior distribution on the location parameter under a normal prior, which can be expressed as a mixture of truncated normal distributions, namely a mixture of (n+1) components for a sample of size n. We actually noticed this fact (which may already be well known) when looking at our leading example in the consistent ABC choice paper, but it vanished from the appendix in the later versions. As detailed in the previous post, I also fought programming issues induced by this mixture, due to round-off errors in the most extreme components, until all approaches provided similar answers.
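For the record, here is a quick sketch of the representation (in notation of my own, not necessarily the survey's), assuming a Laplace(µ, b) likelihood for x₁,…,xₙ and a N(m, τ²) prior on the location µ. The posterior writes
\[
\pi(\mu\mid x_{1:n}) \;\propto\; \exp\Big\{-\frac{1}{b}\sum_{i=1}^n |x_i-\mu| \;-\; \frac{(\mu-m)^2}{2\tau^2}\Big\},
\]
and, for µ between the order statistics $x_{(k)}$ and $x_{(k+1)}$ (with $x_{(0)}=-\infty$ and $x_{(n+1)}=+\infty$), the sum of absolute values is linear in µ,
\[
\sum_{i=1}^n |x_i-\mu| = (2k-n)\mu + c_k,\qquad c_k = \sum_{i>k} x_{(i)} - \sum_{i\le k} x_{(i)}.
\]
Completing the square thus turns each of the (n+1) pieces into a N(m_k, τ²) density with
\[
m_k = m - \frac{\tau^2(2k-n)}{b},
\]
truncated to $(x_{(k)}, x_{(k+1)})$ and weighted proportionally to
\[
\exp\Big\{\frac{m_k^2-m^2}{2\tau^2} - \frac{c_k}{b}\Big\}\,
\Big[\Phi\Big(\frac{x_{(k+1)}-m_k}{\tau}\Big)-\Phi\Big(\frac{x_{(k)}-m_k}{\tau}\Big)\Big],
\]
hence the mixture of (n+1) truncated normals.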
November 29, 2014 at 4:58 pm
Do you have more info on how to get a closed-form posterior for the location parameter of a Laplace likelihood with a normal prior? I am running a Gibbs sampler for a Laplace/Normal mixture model and, without a closed-form posterior, it is quite inefficient. Any further sources you might have would be very helpful. Thanks in advance.
November 29, 2014 at 8:23 pm
Maybe I should write a separate paper about this: it is in the appendix of the first version of our recent Series B paper. All you have to do is remove the absolute value by breaking the posterior into a sum of truncated Gaussians.
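To make the recipe concrete, here is a minimal Python sketch (my own illustration, not the actual code behind the post or the paper), assuming a Laplace(mu, b) likelihood and a N(m, tau²) prior on the location: it builds the (n+1) truncated-normal components and simulates the location exactly, with log-scale weights to tame the round-off issues mentioned above.

import numpy as np
from scipy.stats import norm, truncnorm
from scipy.special import logsumexp

def laplace_normal_mixture(x, b=1.0, m=0.0, tau=1.0):
    # Posterior of mu for x_i ~ Laplace(mu, b), i.e. density exp(-|x-mu|/b)/(2b),
    # under a N(m, tau^2) prior: returns the means, truncation bounds and
    # normalised log-weights of the (n+1) truncated-normal components.
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    k = np.arange(n + 1)                         # number of observations below mu
    csum = np.concatenate(([0.0], np.cumsum(x)))
    c_k = csum[-1] - 2.0 * csum                  # sum_{i>k} x_(i) - sum_{i<=k} x_(i)
    means = m - tau**2 * (2 * k - n) / b         # means after completing the square
    lo = np.concatenate(([-np.inf], x))          # component k lives on (x_(k), x_(k+1))
    hi = np.concatenate((x, [np.inf]))
    # log unnormalised weights: square-completion constant, linear term in c_k,
    # and the N(mean, tau^2) probability of the truncation interval
    log_w = (means**2 - m**2) / (2 * tau**2) - c_k / b
    probs = norm.cdf(hi, means, tau) - norm.cdf(lo, means, tau)
    log_w += np.log(np.maximum(probs, 1e-300))   # guard the most extreme components
    return means, lo, hi, log_w - logsumexp(log_w)

def sample_location(x, size=10_000, b=1.0, m=0.0, tau=1.0, seed=None):
    # Exact posterior simulation: pick a component, then draw a truncated normal.
    rng = np.random.default_rng(seed)
    means, lo, hi, log_w = laplace_normal_mixture(x, b, m, tau)
    w = np.exp(log_w)
    comp = rng.choice(len(w), size=size, p=w / w.sum())
    a, z = (lo - means) / tau, (hi - means) / tau    # standardised bounds
    return truncnorm.rvs(a[comp], z[comp], loc=means[comp], scale=tau,
                         random_state=rng)

# quick sanity check: posterior mean sits near the sample median under a vague prior
x = np.random.default_rng(1).laplace(loc=2.0, scale=1.0, size=25)
print(np.mean(sample_location(x, b=1.0, m=0.0, tau=10.0)), np.median(x))

Such exact draws could replace the location step of a Gibbs sampler altogether; a more careful version would evaluate the tail components with norm.logcdf / norm.logsf rather than clamping at 1e-300, but the clamped components carry negligible weight anyway.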
April 18, 2013 at 6:34 am
There’s a minor typo on p. 2: when describing the demarginalisation procedure, the density under the integral sign is f(x, z | theta), but on the following line it is described as g(x, z | theta).