ICERM, Brown, Providence, RI (#1)

As I mentioned yesterday, and earlier, I was rather excited by my visit to the ICERM building. As it happens, the centre occupies the top floor of a (rather bland!) 11-storey building sitting between Main St. and the river. It is quite impressive indeed, with a feeling of space due to the high ceilings and the glass walls all around the conference room, plus pockets of quietness with blackboards to the rescue. The whiteboard that forms the wall between the conference room and the lobby is also great for discussions, as it is huge (the whole wall is the whiteboard!) and made of a glassy material that makes writing on it a true pleasure (the next step would be to have a recording device embedded in it!). When I gave my talk and attended the other three talks of the day, I rather regretted that the dual projector system did not allow for a lag of sorts in the presentation. Even though the pace of the other talks was quite reasonable (mine was a bit hurried, I am afraid!), writing down a few notes was enough for me to miss some point from the previous slide. With such huge walls, it should be easy to project at least the previous slide at the same time, and maybe even all of the previous slides (maybe, maybe not, as it would quickly get confusing…)

Paul Dupuis’ talk covered material that was new to me on importance sampling for diffusions and the exploration of equilibria, and it was thus quite enjoyable, even while I was fighting one of my dozing attacks. Gareth Roberts’ talk provided a very broad picture of the different optimal scalings (à la 0.234!) for MCMC algorithms (while I have attended several lectures by Gareth on this theme, there is always something new and interesting coming out of them!). Krzysztof Latuszynski’s talk on irreducible diffusions and the construction of importance sampling solutions replacing the (unavailable) exact sampling of Beskos et al. (2006) led to some discussion on the handling of negative weights. This is a question that has always intrigued me: if unbiasedness or exact simulation or something else induces negative weights in a sample, how can we process those weights when resampling? The conclusion of the discussion was that truncating the weights at zero seemed like the best solution, at least when resampling, since the signed weights can be used as such in averages, but I wonder if there is a more elaborate scheme involving mixtures or whatnot!
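To make the distinction concrete, here is a minimal Python sketch (my own illustration, not code from the talk, with artificial weights): signed weights can be plugged directly into a self-normalised average, but resampling requires a proper probability vector, so the negative weights get truncated at zero and the remainder renormalised.

```python
import numpy as np

rng = np.random.default_rng(0)

# Artificial sample with signed importance weights, mimicking an
# unbiased estimator whose weights occasionally go negative
x = rng.normal(size=1000)
w = 1.0 + rng.normal(size=1000)  # some of these dip below zero

# Signed weights are usable as such in a self-normalised average...
signed_estimate = np.sum(w * x) / np.sum(w)

# ...but resampling needs nonnegative, normalised weights,
# hence the truncation at zero discussed above
w_plus = np.maximum(w, 0.0)
probs = w_plus / w_plus.sum()
resampled = rng.choice(x, size=1000, replace=True, p=probs)
truncated_estimate = resampled.mean()
```

Of course the truncation introduces a bias of its own, which is precisely why the question of a cleverer scheme remains open.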
