Scott Schmidler, Steve Scott and I just submitted a proposal for holding the next World ISBA Conference in 2016 in Banff, Canada! After enjoying the superb environment of the Advances in Scalable Bayesian Computation workshop last week, we thought it would be worth a try as a potential location for the next meeting, especially when considering the superlative infrastructure of the Banff Centre (meaning we really do not have to be local to be local organisers!), the very reasonable rates for renting the site and securing two hundred rooms, the potential for a special collaboration with BIRS, the scarcity of alternative proposals (as far as I can fathom) and the ultimate mountain environment… I remember fondly the IMS annual meeting of 2002 there, with a great special lecture by Hans Künsch and, exceptionally, an RSS Read Paper by Steve Brooks, Paolo Giudici and Gareth Roberts. (Not to mention an exhilarating solo scramble up Mount Temple and another one with Arnaud Guillin up the chimneys of Mount Edith!) Since the deadline was this Saturday, March 15, we should hear pretty soon whether we are successful in this bid. (Good luck to our Scottish friends from Edinburgh for their bid to hold ISBA 2018! Moving from the foot of Mount Rundle [above] to the foot of Arthur’s Seat would make for a great transition.)
We have now passed the midpoint of our workshop Advances in Scalable Bayesian Computation, with three talks in the morning and an open research or open air afternoon. (Maybe surprisingly, I chose to stay indoors and work on a new research topic rather than trying cross-country skiing!) If I had to give a theme for the day, it would be (jokingly) corporate Big Data, as the three speakers spoke of problems and solutions connected with Google, Facebook and similar companies. First, Russ Salakhutdinov presented some hierarchical structures on multimedia data, like connecting images and text, with obvious applications at Google. The first part described Boltzmann machines, with impressive posterior simulations of characters and images. (Check the video at 45:00.) Then Steve Scott gave us a Google-motivated entry to embarrassingly parallel algorithms, along the lines of papers recently discussed on the ‘Og. (Too bad we forgot to start the video at the very beginning!) One of the novel things in the talk (for me) was the inclusion of BART in this framework, with the interesting feature that using the whole prior on each machine was way better than using a fraction of the prior, as predicted by the theory! And Joaquin Quiñonero Candela provided examples of machine learning techniques used by Facebook to suggest friends and ads in a most efficient way (techniques remaining hidden!).
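For readers unfamiliar with the embarrassingly parallel approach in Steve Scott’s talk, here is a minimal toy sketch of the consensus Monte Carlo idea on a conjugate Gaussian model: the data are split across machines, each machine samples its subposterior under a fractionated prior p(θ)^(1/S), and the draws are combined by a precision-weighted average. All numbers and variable names below are purely illustrative, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y_i ~ N(theta, 1) with conjugate N(0, 10) prior on theta.
theta_true, n, S = 2.0, 1000, 4
y = rng.normal(theta_true, 1.0, n)
shards = np.array_split(y, S)          # data split across S "machines"

prior_mean, prior_var = 0.0, 10.0

def subposterior_draws(shard, prior_var_s, m=5000):
    # Conjugate Gaussian update on one shard; prior_var_s controls
    # how much of the prior the shard sees (whole vs 1/S fraction).
    prec = 1.0 / prior_var_s + len(shard)
    mean = (prior_mean / prior_var_s + shard.sum()) / prec
    return rng.normal(mean, np.sqrt(1.0 / prec), m)

# Fractionated prior p(theta)^(1/S): a Gaussian with variance S * prior_var.
draws = [subposterior_draws(sh, S * prior_var) for sh in shards]

# Consensus combination: precision-weighted average of the S draw vectors.
weights = [1.0 / np.var(d) for d in draws]
consensus = sum(w * d for w, d in zip(weights, draws)) / sum(weights)

# Full-data posterior mean, for comparison (exact in this conjugate case).
full_prec = 1.0 / prior_var + n
full_mean = (prior_mean / prior_var + y.sum()) / full_prec
print(consensus.mean(), full_mean)
```

In the Gaussian case the weighted average recovers the full posterior exactly; the interest of the talk was precisely that for non-Gaussian targets like BART the recipe for splitting the prior matters.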
Even though the rest of the day was free, the two hours of exercising between the pool in the early morning and the climbing wall in the late afternoon left me with no energy to try curling with a large subsample of the conference attendees, much to my sorrow!