JSM 2010 [day 1]

The first day at JSM is always a bit sluggish, as people slowly drip in and get their bearings. As in Washington D.C. last year, the meeting takes place in a huge conference centre, and thus there is no feeling of overcrowding [so far]. It may also be that the peripheral and foreign location of the meeting put some regular attendees off (not to mention the expensive living costs!).

Nonetheless, the Sunday afternoon sessions started with a highly interesting How Fast Can We Compute? How Fast Will We Compute? session organised by Mike West and featuring Steve Scott, Marc Suchard and Quanli Wang. The topic was parallel processing, either via multiple processors or via GPUs, the latter relating to the exciting talk Chris Holmes gave at the Valencia meeting. Steve showed us some code to explain how feasible the jump to parallel programming was—a point demonstrated by Julien Cornebise and Pierre Jacob after they returned from Valencia—while stressing that much of the processing in MCMC runs is open to parallelisation. For instance, data augmentation schemes can allocate the missing data in parallel in most problems, and the same holds for computing the likelihood of independent data. Marc Suchard focussed on GPUs and phylogenetic trees, both of high interest to me!, and he stressed the huge gains—of the order of hundreds in the decrease in computing time—made possible by exploiting laptop [Macbook] GPUs. (If I got his example correctly, he seemed to be doing an exact computation of the phylogeny likelihood, not an ABC approximation… which is quite interesting, if potentially killing one of my main areas of research!) Quanli Wang linked both previous talks with the example of mixtures with a huge number of components. Plenty of food for thought.
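To make the parallelisation point concrete, here is a minimal sketch (mine, not code from the talks) of the independent-data case: since the log-likelihood of independent observations is a sum, the data can be split into chunks evaluated on separate processors and reduced at the end. The Gaussian model and chunking scheme are purely illustrative.

```python
# A toy illustration (not from the session) of splitting an
# independent-data log-likelihood across worker processes.
import math
import multiprocessing as mp

MU, SIGMA = 0.0, 1.0  # illustrative Gaussian parameters


def chunk_loglik(chunk):
    """Log-likelihood contribution of one chunk of the data."""
    return sum(-0.5 * math.log(2 * math.pi * SIGMA ** 2)
               - (x - MU) ** 2 / (2 * SIGMA ** 2) for x in chunk)


def parallel_loglik(data, n_workers=4):
    """Sum per-chunk contributions computed in parallel."""
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with mp.Pool(n_workers) as pool:
        return sum(pool.map(chunk_loglik, chunks))


if __name__ == "__main__":
    data = [0.1 * i for i in range(-50, 51)]
    # the parallel reduction matches the serial sum
    assert abs(parallel_loglik(data) - chunk_loglik(data)) < 1e-9
```

The same divide-and-reduce pattern is what makes GPU versions attractive: each likelihood term (or each missing-data allocation in a data augmentation step) is an independent task, so the speed-up scales with the number of cores rather than requiring any change to the MCMC logic itself.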

I completed the afternoon session with the Student Paper Competition: Bayesian Nonparametric and Semiparametric Methods, which was discouragingly empty of participants, with two of the five speakers missing and fewer than twenty people in the room. (I did not get the point about the competition as to who was ranking those papers. Not the participants apparently!)

6 Responses to “JSM 2010 [day 1]”

  1. […] for parallel processing Given the growing interest in parallel processing through GPUs or multiple processors, there is a clear need for a proper use of (uniform) random number generators in this environment. […]

  2. […] possibilities from Chris Holmes’ talk in Valencia. (As well as directions drafted in an exciting session in Vancouver!) The (free) gains over standard independent Metropolis-Hastings estimates are […]

  3. This is the second-best attended JSM ever, and certainly one of the highest quality. I am surprised more people are not blogging it, but I guess that’s the price of paid internet access.

  4. @Jon: actually, the Macbook was just an example to show how omnipresent GPUs are nowadays — although the model on his Macbook only has 16 CUDA cores. In the article Suchard and Rambaut (2009), they used 3 GPUs on one computer, all three Nvidia GeForce GTX 280s with 240 CUDA cores each. The equivalent price range nowadays is the GeForce GTX 480, with 480 CUDA cores and 1.5GB of GDDR5 memory (faster than the GDDR3 of the GTX 280) and with a new architecture named “Fermi” which enables a few more high-level computations — the main impact being the more widespread usability of double-precision floating point (which, although present in the former architecture, seems to be more emphasized in Fermi).
    Hope this helps :)

  5. S. A. Khan Says:

    “I did not get the point about the competition as to who was ranking those papers. Not the participants apparently!”

    Thanks for pointing it out. I was one of the participants, and still do not know how the articles were evaluated in that session, and who was the winner.

  6. Interesting note about the Macbook GPU — do you know which Macbook he was referring to? I suspect an integrated GPU won’t be able to provide the hundredfold speed-up he alludes to.
