JSM 2009 impressions [day 1]

Yesterday afternoon, I attended the Bayesian model choice invited session, with talks by David Madigan, Hani Doss and Hugh Chipman. The room was packed and the talks were quite interesting. Madigan et al.'s Sequential Bayesian Model Selection uses Mike West's dynamic parameters to allow for more adaptive models, at the cost of a lack of stabilisation in the estimation of the parameter that reflects the time-varying modelling. (I could not really understand why the model choice approach could not handle the hidden Markov chain of the model labels, though.)

The second talk, Doss' Estimation of Large Families of Bayes Factors from Markov Chain Output, was more central to my interests (and to my own talk) and had several innovations worth mentioning. One was the use of the Bayes factor for prior comparison, something I had never thought of before. From a strictly Bayesian perspective, this is rather surprising, as it means using the data to compare your priors! Completely unorthodox! At a deeper level, I am still wondering about the validation of the Bayes factor in this setting… The second innovation was that, thanks to the same likelihood appearing in both numerator and denominator, Hani Doss was able to use a new type of bridge sampling estimator based on the identity

\mathbb{E}_{\pi_2(\cdot\mid x)} \left[ \dfrac{\pi_1(\theta)}{\pi_2(\theta)} \right] = \dfrac{m_1(x)}{m_2(x)}

which holds because, with a common likelihood f(x|\theta), the posterior density writes \pi_2(\theta|x)=\pi_2(\theta)f(x|\theta)/m_2(x), so the expectation of the prior ratio reduces to \int\pi_1(\theta)f(x|\theta)\,\text{d}\theta\,/\,m_2(x)=m_1(x)/m_2(x). A sample from a single posterior is thus sufficient for approximating the Bayes factor in this case.
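Here is a minimal sketch of the resulting estimator, on a toy conjugate normal model of my own choosing (x_1,…,x_n ~ N(θ,1) with priors π_i = N(0,τ_i²)), not the example from the talk; the Monte Carlo average of the prior ratio over one posterior sample is checked against the closed-form Bayes factor:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# hypothetical toy model: x_1,...,x_n ~ N(theta, 1), with two priors
# pi_i = N(0, tau_i^2) sharing the same likelihood
n = 50
x = rng.normal(0.3, 1.0, size=n)
xbar = x.mean()
tau1, tau2 = 1.0, 5.0

# exact (conjugate) posterior under pi_2: N(m2, v2)
v2 = 1.0 / (n + 1.0 / tau2**2)
m2 = v2 * n * xbar
theta = rng.normal(m2, np.sqrt(v2), size=100_000)

# ratio estimator of B_12 = m_1(x)/m_2(x), from the identity above
w = stats.norm.pdf(theta, 0.0, tau1) / stats.norm.pdf(theta, 0.0, tau2)

# closed-form check: m_i(x) is proportional to N(xbar | 0, 1/n + tau_i^2)
B12 = stats.norm.pdf(xbar, 0.0, np.sqrt(1 / n + tau1**2)) \
    / stats.norm.pdf(xbar, 0.0, np.sqrt(1 / n + tau2**2))
print(w.mean(), B12)  # the two agree up to Monte Carlo error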

A third innovation was using the control variates

\dfrac{\pi_i(\theta) - \pi_j(\theta)}{\pi_1(\theta) f(x|\theta)}

since these are always unbiased estimators of zero: integrating against the posterior \pi_1(\theta|x)\propto\pi_1(\theta)f(x|\theta) leaves \int\{\pi_i(\theta)-\pi_j(\theta)\}\,\text{d}\theta\,/\,m_1(x)=(1-1)/m_1(x)=0, at least when both priors are proper. A point I need to investigate further is how improper priors (the example in Hani Doss' talk was based on Zellner's g-prior) can be accommodated in this regard.
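As an illustration of this mean-zero property (again a toy normal sketch of mine, not Doss' actual implementation, with proper priors deliberately chosen tight enough that the control variate keeps a finite variance under the \pi_1-posterior), the control variate can be recycled in a regression step to adjust the plain ratio estimator:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# toy normal model with three proper priors pi_i = N(0, tau_i^2)
n = 4
x = rng.normal(0.3, 1.0, size=n)
xbar = x.mean()
tau = {1: 1.0, 2: 0.6, 3: 0.5}

# exact (conjugate) posterior under pi_1
v1 = 1.0 / (n + 1.0 / tau[1]**2)
m1 = v1 * n * xbar
theta = rng.normal(m1, np.sqrt(v1), size=100_000)

prior = {i: stats.norm.pdf(theta, 0.0, tau[i]) for i in tau}
# likelihood of the sample up to a constant factor: N(xbar | theta, 1/n);
# dropping the constant merely rescales the control variate, keeping it mean-zero
lik = stats.norm.pdf(xbar, theta, np.sqrt(1.0 / n))

w = prior[2] / prior[1]                       # plain estimator of B_21 = m_2(x)/m_1(x)
h = (prior[2] - prior[3]) / (prior[1] * lik)  # control variate, with E[h | x] = 0

C = np.cov(w, h)
beta = C[0, 1] / C[1, 1]                      # regression coefficient on the control variate
B21 = stats.norm.pdf(xbar, 0.0, np.sqrt(1 / n + tau[2]**2)) \
    / stats.norm.pdf(xbar, 0.0, np.sqrt(1 / n + tau[1]**2))
print(w.mean(), w.mean() - beta * h.mean(), B21)  # plain, adjusted, exact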

The talk by Hugh Chipman was about the BART (Bayesian additive regression tree) “machine learning” technique of Chipman, George and McCulloch, which is always impressive as a completely non-parametric regression method. The innovation was in using many trees (50 or 200) simultaneously in the model, not as in model averaging, but as a basis for more complex dependences. The differences in the influence of the covariates seem to wane as the number of trees grows but, as discussed during the session, this may be due to the frequency of use of a covariate being too crude an indicator.
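For reference, and as standard in Chipman, George and McCulloch's formulation, BART models the regression function as a sum of m trees,

y = \sum_{j=1}^m g(x; T_j, M_j) + \epsilon\,,\qquad \epsilon\sim\mathcal{N}(0,\sigma^2)\,,

where each tree T_j, with associated leaf values M_j, is kept shallow by its prior, so that the trees act as weak learners jointly building complex dependences rather than as competing models to be averaged.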

I then went to Aad van der Vaart's Le Cam lecture on Some Frequentist Results on Posterior Distributions on Infinite-Dimensional Parameter Spaces; as usual, he managed to give a very methodical and clear overview of the results on the asymptotics of Bayesian non-parametric estimators.

The meeting is, as forecasted, a monster of a meeting, but the conference center is so huge that it is not overwhelming. The JSM staff are quite efficient and they managed to solve the issue of the Series B editors' meeting almost immediately.

5 Responses to “JSM 2009 impressions [day 1]”

  1. I actually talked with Hani Doss about the improper prior issue yesterday and he confirmed he had to use proper priors for the control variate argument to apply.

  2. Forgot to mention my anticipation for the other IMS Medallion lecture, by Alistair Sinclair, on Markov Chain Monte Carlo in Theoretical Computer Science. These great talks are definitely a reason to endure the somewhat train-station-like atmosphere… along with the tremendous (although totally childlike) thrill of using an original Enigma machine at the NSA booth — go and try it.

  3. On JSM Day 2, the special prize goes to the guards who block access to the conference rooms once all the chairs are filled: nobody is allowed to stand in the back, for “fire regulation” purposes. Picture this: for Wellner and Efron's talk, 20 to 30 outraged statisticians were locked out of the amphitheater, forbidden to attend. The guard called for backup and literally radioed for a *bouncer* (!) to keep the door! Total nonsense.

    Apart from that, nice talks from both Wellner and Efron on the “future of statistics”, with a nice look back at a 1965 conference on the same topic by Le Cam and Savage and at how their predictions came true. Another very nice IMS Medallion lecture by Casella this morning: optimal design for microarrays, or how the good old methods (BIBDs) ought to be remembered for cutting-edge applications. Looking forward to Gareth Roberts' IMS Medallion lecture!

    Last but not least, it is a fascinating occasion to discover statistics from a widely different angle, through the many non-mathematical sessions, such as “Protecting Individual Privacy in the Struggle Against Terrorism”. Excellent talk by Steven Fienberg, whose ideas are as refreshing as his outspokenness — especially after a rather polished (although informative) summary of the eponymous NRC report.
