Dan Simpson’s seminar at CREST

Daniel Simpson gave a seminar at CREST yesterday on his recently arXived paper, “Penalising model component complexity: A principled, practical approach to constructing priors”, written with Thiago Martins, Andrea Riebler, Håvard Rue, and Sigrunn Sørbye. A paper he should also have presented in Banff last month, had he not lost his passport in København airport… I have already commented at length on this exciting paper, which will hopefully become a discussion paper in a top journal!, so I am just pointing out two things that came to my mind during the energetic talk Dan delivered to our group. The first is that their penalised complexity (PC) priors rely on choices about the ordering of the relevance, complexity, nuisance level, &tc. of the parameters, just like reference priors. While Dan already wrote a paper on Russian roulette, there is also a Russian doll principle at work behind (or within) PC priors. Each shell of the Russian doll corresponds to a further level of complexity whose order needs to be decided by the modeller… Not very realistic in a hierarchical model with several types of parameters that only have local meaning.
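For readers who have not yet seen the paper, the PC prior recipe, as I understand it, places an exponential prior on the distance to a “base” model (the innermost doll), with the rate fixed by a user-supplied probability statement. Here is a minimal sketch for a standard-deviation parameter, where the distance reduces to σ itself and the change of variables to the precision yields the paper’s type-2 Gumbel density; the values of U and α are arbitrary illustrations, not taken from the paper:

```python
import math

# PC prior sketch for a standard deviation sigma: the base model is
# sigma = 0 (component absent), the distance to it is d(sigma) = sigma,
# and an exponential prior with rate lam is put on that distance.
# The rate is fixed by the user statement P(sigma > U) = alpha.

def pc_rate(U, alpha):
    """Rate lam of the exponential PC prior so that P(sigma > U) = alpha."""
    return -math.log(alpha) / U

def pc_prior_sigma(sigma, lam):
    """PC prior density on the standard deviation: Exp(lam)."""
    return lam * math.exp(-lam * sigma)

def pc_prior_precision(tau, lam):
    """Same prior on the precision tau = 1/sigma^2 by change of variables:
    the type-2 Gumbel density of the paper."""
    return 0.5 * lam * tau ** (-1.5) * math.exp(-lam / math.sqrt(tau))

U, alpha = 1.0, 0.01            # "P(sigma > 1) = 0.01", an arbitrary choice
lam = pc_rate(U, alpha)
# sanity check: the exceedance probability under Exp(lam) is exp(-lam * U)
assert abs(math.exp(-lam * U) - alpha) < 1e-12
```

Each further shell of the Russian doll would require another such distance-and-rate specification, which is exactly where the ordering issue above bites.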

My second point is that the construction of those “politically correct” (PC) priors reflects another Russian doll structure, namely one of embedded models, and hence would, and should, lead to a natural multiple testing methodology. Except that Dan rejected this notion during his talk, being opposed to testing per se. (A good topic for one of my summer projects, if nothing more!)

One Response to “Dan Simpson’s seminar at CREST”

  1. Dan Simpson Says:

    Thanks Christian! I really enjoyed giving the talk to such a lively audience!

    On the first point, you’re right. It’s really hard to do in general!

    The disease mapping example (the BYM model) we have in the paper is nice as it has a (relatively) local component (made up of the sum of the structured and unstructured effects) that is controlled by two parameters. And by thinking about what the model component is supposed to do, we can see that there is a better parameterisation in terms of a variance parameter (turning the tap on/off) and a mixing parameter that controls how much structure is in the component (changing the balance of hot and cold water). This has a sensible implicit order. Models can be written in order of complexity as:
    nothing -> iid -> structured.
    And because of this hierarchy of “base” models and the natural, interpretable ordering, we can set good priors for this local component without needing to know the whole global model. (In the Germany example, there is a separate spline model on the covariate, so this isn’t a one-component graphical model.)
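    The tap/balance reparameterisation described above can be sketched as follows; note that u below is a placeholder iid draw rather than a properly scaled spatially structured (ICAR) effect, which is the delicate part in practice:

```python
import math
import random

# Sketch of the reparameterised disease-mapping component, under my
# reading of the paper's BYM parameterisation:
#     b = sigma * (sqrt(1 - phi) * v + sqrt(phi) * u)
# where v is iid noise, u is the (scaled) structured effect, sigma is
# the overall variance parameter ("tap") and phi the mixing parameter
# ("balance of hot and cold water").

def bym_component(v, u, sigma, phi):
    return [sigma * (math.sqrt(1.0 - phi) * vi + math.sqrt(phi) * ui)
            for vi, ui in zip(v, u)]

random.seed(1)
n = 5
v = [random.gauss(0, 1) for _ in range(n)]
u = [random.gauss(0, 1) for _ in range(n)]   # stand-in for the ICAR term

b_off  = bym_component(v, u, sigma=0.0, phi=0.5)  # tap off: component vanishes
b_iid  = bym_component(v, u, sigma=1.0, phi=0.0)  # base model: pure iid
b_icar = bym_component(v, u, sigma=1.0, phi=1.0)  # fully structured

assert all(bi == 0.0 for bi in b_off)
assert b_iid == v    # phi = 0 recovers the iid "base" model
assert b_icar == u   # phi = 1 puts all the variance on structure
```

    Turning sigma to zero switches the whole component off, while phi walks along the nothing -> iid -> structured hierarchy, which is what makes the ordering interpretable.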

    “Each shell of the Russian doll corresponds to a further level of complexity whose order need be decided by the modeller… Not very realistic in a hierarchical model with several types of parameters having only local meaning.”

    But we also needed to know *a lot* about what the model component does. I would argue that that’s neither a feature nor a bug. It’s a reality. Parameterisation in a way that facilitates sensible priors is a modelling issue and needs to be handled by the modeller. So I’m not sure I agree with the above quote. But if it’s possible to do for “general” (or some, or a few) components what we did for the BYM model, I think local specifications are still possible. In the end, what I hope this paper does is give the modeller some idea of what to look for when parameterising models and how to then set some useful priors.

    Beyond this, finding approximately orthogonal parameterisations and setting independent priors is the best we’ve done so far…

    (There’s almost certainly some link here with the work Cox and Reid did arguing for orthogonal parameterisations as a way to facilitate parameter estimation)

    There really is *so much* exciting work to be done here (can you tell I’m still excited by this paper?), but I think some things are common to any method for setting priors on multiple parameters. And there are definitely questions about how to really stack these together across complex (object-oriented) graphical models. In this setup, there are also interesting questions on how strongly to penalise components of models that are over-specified. (We talked a little about mixture models…)

    As for testing, well, it hasn’t come up yet for me and I’m not planning on going hunting for problems. It’s one of those things I just don’t find very interesting (like, say, football). But you never know… (and there are lots of really smart people who are interested in it, so I’m pretty sure the area will survive without my amateur musings ;p )
