Bayes-250, Edinburgh [day 2]

After a terrific run this morning to the top of Arthur’s Seat, and then around it (the ribs are feeling fine now!), the Bayes-250 talks were exhilarating and challenging. Jim Smith gave an introduction to the challenges of getting different experts to collaborate on a complex risk assessment, much in the spirit of his book, which got me wondering about experts with their own agenda/utility function. For instance, in the case of the recent Fukushima disaster, experts from the electricity company could not be expected to provide trustworthy answers… This meant the assessor’s loss function had to account for this bias in the experts’ opinions. John Winn (from Microsoft Cambridge) argued for the development of probabilistic programming, which meant incorporating functions and types like

  • random (drawing a value from a given probability distribution);
  • constrain (imposing the constraints coming from the data);
  • infer (deriving the posterior distribution);

into the standard programming languages (a toy sketch of these three primitives is given below). The idea is conceptually interesting, and the notion of linking Thomas Bayes with Ada Byron Lovelace is promising in a steampunk universe; however, I remain unconvinced by the universality of the target, as approximations such as EP and variational Bayes need to be introduced for the fast computation of the posterior distribution. Peter Green presented an acceleration device for simulating over decomposable graphs, thanks to the use of the junction tree. Zoubin Ghahramani recalled the origins of the Indian buffet process and provided along the way a list of anti-Bayesian myths that was quite worth posting. Neil Lawrence showed us an interesting piece of work on latent forces, in the mechanical sense, even though I could not keep up with the mechanics behind it!

In the afternoon, Michael Goldstein told us why Bayes’ theorem does not work (not that I agree with him on that point!), Peggy Seriès explained how, fascinatingly, the brain processes information in a Bayesian manner with an “optimal” prior, and Nicolas Chopin talked about his expectation-propagation, summary-less, likelihood-free algorithm I discussed a few days ago. We also had a lively discussion about ABC model choice, from the choice of the metric distance to the impact of the summary statistics. The meeting ended with Andrew Fraser telling us about Thomas Bayes’ studies at Edinburgh in 1719-1721 and a quick stroll through the Old College. The day ended in a rather surprising pub, The Blind Poet, and a so-so south Indian restaurant, where we most unexpectedly bumped into Marc Suchard, visiting collaborators in Edinburgh!
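To make those three primitives a bit more concrete, here is a naive sketch of my own (not John’s, and certainly not how an engine like Infer.NET proceeds, since it relies on EP or variational approximations rather than brute force) of what random, constrain and infer could look like in Python, on the classic two-coins toy example; the function and class names are mine and do not come from any existing library, and rejection sampling simply stands in for the actual inference machinery:

  from collections import Counter
  from random import random as _uniform


  class Reject(Exception):
      """Raised when a sampled execution violates an observed constraint."""


  def random(dist):
      # 'random' primitive: draw a value from a distribution given as a
      # dict {value: probability}.
      u, cum = _uniform(), 0.0
      for value, prob in dist.items():
          cum += prob
          if u <= cum:
              return value
      return value  # guard against floating-point rounding


  def constrain(condition):
      # 'constrain' primitive: condition on observed data by rejecting
      # any execution that contradicts it.
      if not condition:
          raise Reject


  def infer(model, n_samples=100_000):
      # 'infer' primitive: approximate the posterior distribution of the
      # model's return value by brute-force rejection sampling.
      kept = []
      for _ in range(n_samples):
          try:
              kept.append(model())
          except Reject:
              pass
      counts = Counter(kept)
      return {value: count / len(kept) for value, count in counts.items()}


  def two_coins():
      # Toy model: two fair coins; we observe that at least one came up
      # heads and ask for the posterior on the first one.
      first = random({True: 0.5, False: 0.5})
      second = random({True: 0.5, False: 0.5})
      constrain(first or second)
      return first


  print(infer(two_coins))  # roughly {True: 0.67, False: 0.33}

Obviously, brute-force rejection does not scale, which is precisely where approximations like EP and variational Bayes enter the picture, and where my doubts about the universality of the approach lie.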

6 Responses to “Bayes-250, Edinburgh [day 2]”

  1. […] in a Bayesian fashion, actualising predictions based on current observations (as exposed at Bayes 250), but also that the updating is not “objective”! While this may sound as if the […]

  2. […] question as to whether (and then how) Hume and Bayes could have interacted. (When Bayes studied in Edinburgh in the 1720s, Hume was less than 12…) […]

  3. […] book inevitably starts with the (patchy) story of Thomas Bayes’s life, incl. his passage in Edinburgh, and a nice non-mathematical description of his ball experiment, the next chapter is about […]

  4. […] the coincidence of bumping into Marc Suchard in an Edinburgian Indian restaurant on Tuesday night, I faced another if much less pleasant coincidental event: for the third time in a […]

  5. David Rohde Says:

    I would be really interested if you would like to expand on your comments or criticism about Bayes Linear… I have been looking for a commentary on this from a more traditional Bayesian perspective for some time…

    • The criticism was rather about Michael’s radical dismissal of the posterior probability (resulting from Bayes’ theorem) as a conditional probability, because of its dependence on the model, and about the potential meaninglessness of P(B|A) when A is found to be wrong before B occurs. While I cannot say I completely followed Michael’s arguments, they do sound too radical to me: for one thing, I did not see how an expectation could save the day when, mathematically, there is an equivalence between the definition of a measure and the corresponding definition of expectations of measurable functions. For another, the fact that some entities are model dependent is a non-issue: in model choice settings, the encompassing measure takes care of it. Bayes linear is a different thing: it can be seen as a robust Bayes approach of sorts, requiring a smaller input at the prior modelling level.
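      To spell out the equivalence I am invoking, in plain notation rather than in full measure-theoretic dress: for a probability measure P and any measurable set A, P(A) = E[I_A], the expectation of the indicator function of A, so prescribing all such expectations amounts to prescribing the measure itself, which is why I fail to see what is gained foundationally by trading probabilities for expectations.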
