Bayesian modeling using WinBUGS

Yes, yet another Bayesian textbook: Ioannis Ntzoufras’ Bayesian modeling using WinBUGS was published in 2009 and it got an honourable mention at the 2009 PROSE Award. (Nice acronym for a book award! All the mathematics books awarded that year were actually statistics books.) Bayesian modeling using WinBUGS is rather similar to the more recent Bayesian ideas and data analysis that I reviewed last week and hence I am afraid the review will draw a comparison between both books. (Which is a bit unfair to Bayesian modeling using WinBUGS since I reviewed Bayesian ideas and data analysis on its own! However, I will presumably write my CHANCE column as a joint review.)

“As history has proved, the main reason why Bayesian theory was unable to establish a foothold as a well accepted quantitative approach for data analysis was the intractability involved in the calculation of the posterior distribution.” Chap. 1, p.1

The book launches into a very quick introduction to Bayesian analysis, since, by page 15, we are “done” with linear regression and conjugate priors. This is somehow softened by the inclusion at the end of the chapter of a few examples, including one on the Greek football team in Euro 2004, but nothing comparable with Christensen et al.’s initial chapter of motivating examples. Chapter 2 on MCMC methods follows the same pattern: a quick and dense introduction in about ten pages, followed by 40 pages of illuminating examples, worked out in full detail. CODA is described in an appendix. Compared with Bayesian ideas and data analysis, Bayesian modeling using WinBUGS spends time introducing WinBUGS: Chapter 3 acts like a 20-page user manual, while Chapter 4 corresponds to the WinBUGS example manual. Chapter 5 returns to a more statistical topic, the processing of regression models (including Zellner’s g-prior), up to ANOVA. Chapter 6 extends the previous chapter to categorical variables and the ANCOVA model, as well as the 2006–2007 English Premier League. Chapter 7 moves to the standard generalised linear models, with an extension in Chapter 8 to count data, zero-inflated models, and survival data. Chapter 9 covers hierarchical models, with mixed models, longitudinal data, and the water polo World Cup 2000.

“Although this [the harmonic mean] estimator is simple, it is quite unstable and sensitive to small likelihood values and hence is not recommended.” Chap. 11, p. 393

While most chapters rely on DIC for model comparison, the last two chapters of Bayesian modeling using WinBUGS open on other model comparison approaches like the posterior predictive p-value, residual values, and cross-validation (with, once again!, the dreaded harmonic mean estimator, and, once again, Geisser and Eddy’s conditional predictive ordinates), keeping the introduction of Bayes factors for Chapter 11, with an immediate criticism through the Jeffreys–Lindley–Bartlett paradox, maybe because “Bayes factors cannot be generally calculated within WinBUGS unless sophisticated approaches are used” (p.390). Surprisingly, and as clearly stated in the above quote, the computational section warns about the poor performance of the harmonic mean estimator without making the connection with the earlier proposal of the very same estimator (p.375). After reviewing the most standard approaches to marginal likelihood approximation, Ioannis Ntzoufras falls back on a Laplace approximation to the likelihood function. This chapter also covers variable selection by Gibbs sampling, stochastic search, the Carlin and Chib (1995) method, and reversible jump MCMC, the latter being expedited in half a page! It concludes with the (non-Bayesian) information criteria, AIC and BIC.
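The instability the quote warns about is easy to reproduce outside WinBUGS. Here is a minimal Python sketch on a hypothetical conjugate normal toy model of my own choosing (not an example from the book), where the marginal likelihood is available in closed form, so repeated harmonic mean estimates from posterior draws can be compared against the exact value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conjugate toy model (not the book's example):
#   y_i ~ N(theta, 1), i = 1..n, with prior theta ~ N(0, 1)
n = 20
y = rng.normal(0.5, 1.0, size=n)

def log_lik(theta):
    """Log-likelihood of the whole sample at each value in the array theta."""
    theta = np.atleast_1d(theta)
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * np.sum((y[:, None] - theta) ** 2, axis=0))

# Conjugacy gives the posterior and the EXACT log marginal likelihood:
#   theta | y ~ N(n*ybar/(n+1), 1/(n+1))
post_var = 1.0 / (n + 1.0)
post_mean = y.sum() * post_var
log_marginal = (-0.5 * n * np.log(2 * np.pi)
                - 0.5 * np.log(n + 1.0)
                - 0.5 * np.sum(y ** 2)
                + 0.5 * post_mean ** 2 / post_var)

def harmonic_mean_log_marginal(n_draws):
    """Harmonic mean estimate of log m(y) from posterior draws (stabilised)."""
    theta = rng.normal(post_mean, np.sqrt(post_var), size=n_draws)
    a = -log_lik(theta)                 # log of the inverse likelihood
    amax = a.max()                      # log-sum-exp trick for stability
    return -(amax + np.log(np.mean(np.exp(a - amax))))

# Replicating the estimator exposes its spread around the exact value:
# the inverse likelihood has infinite variance under the posterior here.
estimates = np.array([harmonic_mean_log_marginal(5_000) for _ in range(20)])
```

Even with 5,000 draws per run, the replicates scatter noticeably around `log_marginal` (and the estimator is known to be biased towards overestimating the marginal likelihood), which is the practical face of the quoted warning.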

“Bayesian statistics suddenly became fashionable, opening new highways for statistical research.” Chap. 1, p.2

On the material (!) side, while the presentation is overall very nice, I dislike the fonts (which are imposed by J. Wiley, as I remember from our mixture book) and the fact that the text within a page seems to have slid down to the bottom: I mean, each page of text ends up (or down) very close to the physical bottom of the page. Nothing important, obviously, but a slight impression of cramming… (See, e.g., pages 3 or 38, where a subscript would otherwise have been sticking out of the book!) A further nitpicking remark is that the examples start as indented and then lose their indentation after a paragraph or two, which does not help in identifying examples as a whole within the text. I like the idea of highlighting R/WinBUGS code with a grey background (as we did in Bayesian Core); however, the rendering of the two-column opposition of algorithm and R code is unfortunately difficult to read. Some graphs are given as screen copies, which reduces their readability for no proper reason. Also, the price of the book ($130, $102 on amazon) is a wee bit higher than similar books in the area (Bayesian ideas and data analysis is only $62, Bayesian data analysis is only $62, the Bayesian choice a mere $37…), but this seems to be a publisher’s policy, witness Bolstad’s Introduction to Bayesian Statistics at the same $102.

“All predictive diagnostics presented above have the disadvantage of double usage of the data.” Chap. 10, p.375

In conclusion, and in keeping with their respective titles, Bayesian modeling using WinBUGS feels more technical than Bayesian ideas and data analysis, even though their coverage is in fine very similar. Not only do both insist on methodology much more than theory, but they also similarly emphasize the applied aspect through numerous examples based on real data. The latter is slightly more philosophical and for this reason (as well as for typographical comfort) more to my own personal subjective taste. I figure the choice of one versus the other as a textbook will very much depend on the intended audience: more mature statistical students may favour Bayesian ideas and data analysis, while more applied students could benefit more from Bayesian modeling using WinBUGS.

(Note: this book is not to be confused with the very recent Bayesian population modeling using WinBUGS, by Marc Kéry and Michael Schaub, which is about Bayesian analysis for ecology and which I am looking forward to reading…)

8 Responses to “Bayesian modeling using WinBUGS”

  1. On the off chance someone sees this, I have a question regarding CPO. The easy estimate of the CPO is apparently based on a “dreaded” harmonic mean estimator, which in general has been thoroughly discredited.

    In the case of the CPO, though, Gelfand and Dey argue that it is fine because you are effectively using the posterior distribution to approximate the leave-one-out posterior distribution, which seems somewhat reasonable to me (though it isn’t hard to imagine some kind of catastrophe). By contrast, the harmonic mean estimator of the marginal is effectively using the prior to approximate the posterior which is much more dubious.
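    (To make the identity concrete, here is a minimal Python sketch on a hypothetical conjugate normal model of my own choosing, not Gelfand and Dey’s example: the CPO for each observation is estimated as the harmonic mean of its single-observation density over draws from the full posterior, and compared with the exact leave-one-out predictive, which is available in closed form for this model.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical conjugate model: y_i ~ N(theta, 1), with prior theta ~ N(0, 1)
n = 15
y = rng.normal(1.0, 1.0, size=n)

# Draws from the FULL posterior theta | y ~ N(n*ybar/(n+1), 1/(n+1))
post_var = 1.0 / (n + 1.0)
post_mean = y.sum() * post_var
theta = rng.normal(post_mean, np.sqrt(post_var), size=10_000)

def norm_pdf(x, mu, sd=1.0):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# CPO_i = p(y_i | y_{-i}) estimated as the harmonic mean of p(y_i | theta_s),
# with theta_s drawn from the full posterior -- the "full posterior
# approximates the leave-one-out posterior" argument discussed above
cpo = 1.0 / np.mean(1.0 / norm_pdf(y[:, None], theta), axis=1)

# Exact leave-one-out predictive for this model: y_i | y_{-i} ~ N(m_i, 1 + 1/n)
loo_var = 1.0 / n                 # posterior variance of theta given y_{-i}
loo_mean = (y.sum() - y) * loo_var
exact = norm_pdf(y, loo_mean, np.sqrt(1.0 + loo_var))
```

    Unlike the full harmonic mean of the likelihood, each term here only inverts a single-observation density, which keeps the Monte Carlo variance finite in this example, and the estimates land close to the exact leave-one-out values; this does not, of course, settle the question of how badly things can go with heavier-tailed likelihoods.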

    Any guidance? Is it terrible and I should stay away, or maybe ok?

    • Thanks! Without remembering all the details, it seems fine as a computational device. Now, whether the (CPO) tool is fully Bayesian is another issue…

  2. We should add to the review that Bayesian Modeling Using Winbugs has an insane number of typos.

  3. […] some empirical Bayes procedures are asymptotically convergent. The pseudo-marginal likelihood of Geisser and Eddy (1979), used in  Bayesian ideas and data analysis, is defined […]

  4. […] involved the pseudo-Bayes factors (CPO) of Geisser and Eddy discussed recently in connection with the book reviews of both Bayesian ideas and data analysis and Bayesian modeling using WinBUGS. Unfortunately, again […]

  5. Your reviewing of books on this blog is a great idea. I like it very much. Please keep it up for many other books. Thanks
