workshop in Columbia [day 2]

The second day at the workshop was closer to my research topics and thus easier to follow, though just as enjoyable as yesterday: Jun Liu’s talk went over his modification of the Clifford-Fearnhead particle algorithm in great detail, Sam Kou explained how a simulated annealing algorithm could bring considerable improvement to the prediction of the 3D structure of molecules, Jeff Rosenthal showed us recent results on, and applications of, adaptive MCMC, Gareth Roberts detailed his new results on the exact simulation of diffusions, and Xiao-Li Meng went back to his 2002 Read Paper to explain how we should use likelihood principles in Monte Carlo as well. He even convinced me that I was “too young” to get the whole idea! (I was a discussant of this paper.) All the talks were thought-provoking and I very much enjoyed Gareth’s approach and description of his algorithm (as did the rest of the audience, to the point of asking too many questions during the talk!). However, the most revealing talk was Xiao-Li’s, in that he did succeed in convincing me of the pertinence of his “unknown measure” approach, thanks to a multiple mixture example where the actual mixture importance sampler

\dfrac{1}{n}\sum_{i=1}^n \dfrac{q(x_i)}{\sum_j \pi_j p_j(x_i)}

gets dominated by the estimated mixture version

\dfrac{1}{n}\sum_{i=1}^n \dfrac{q(x_i)}{\sum_j \hat\pi_j p_j(x_i)}

I still remain skeptical of the group-averaging perspective, for the same reason as before: the group does not act in conjunction with the target function, hence the averaging is over transforms of no relevance for the target. Nonetheless, the idea of estimating the best “importance function” from the simulated values, rather than using the genuine importance function, is quite a revelation, linking with an earlier question of mine (and others) on the (lack of) exploitation of the known values of the target at the simulated points. (Maybe up to a constant.) Food for thought, certainly… In memory of this discussion, here is a picture [of an ostrich] my daughter drew at the time for my final slide in London:
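For the sake of illustration, here is a small numerical sketch of the two estimators above, on a toy setup of my own making (not Xiao-Li’s example): the integrand q is a normal density, the proposal a two-component normal mixture, and both estimators target the value one.

```python
import numpy as np

# Toy illustration of the two mixture importance sampling estimators above.
# All specifics (q, the weights pis, the component means locs) are hypothetical
# choices made for this demonstration only.
rng = np.random.default_rng(1)

def normpdf(x, loc, scale):
    return np.exp(-0.5 * ((x - loc) / scale) ** 2) / (scale * np.sqrt(2 * np.pi))

def q(x):
    # q integrates to one, so both estimators below target the value 1
    return normpdf(x, 0.5, 1.2)

pis = np.array([0.3, 0.7])      # true mixture weights pi_j
locs = np.array([-1.0, 1.0])    # component means of the p_j's

n = 10_000
labels = rng.choice(2, size=n, p=pis)    # which component each draw comes from
x = rng.normal(locs[labels], 1.0)        # x_i ~ sum_j pi_j p_j

dens = normpdf(x[:, None], locs[None, :], 1.0)   # matrix of p_j(x_i)

# classical mixture importance sampler, using the true weights pi_j
est_true = np.mean(q(x) / dens.dot(pis))

# "estimated mixture" version: plug in the observed proportions hat pi_j
pis_hat = np.bincount(labels, minlength=2) / n
est_hat = np.mean(q(x) / dens.dot(pis_hat))

print(est_true, est_hat)   # both close to 1
```

The point of the example is that replacing the known π_j’s by their empirical counterparts does not hurt, and can in fact reduce the variance of the estimator.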

5 Responses to “workshop in Columbia [day 2]”


  4. I don’t know whether this is related to Meng’s talk, but at least superficially such ideas have also appeared in “Importance Sampling via the Estimated Sampler” by Henmi, Yoshida and Eguchi, published in Biometrika, 2007.

    • Thanks, Arnaud. I will take a look. There was another paper in Biometrika in the 1990’s, by Goffinet, that estimated the acceptance-reject probability, which may be related as well…
