Winter workshop, Gainesville (day 2)
On day #2, besides my talk on “empirical likelihood” (ABCel) computation (mostly recycled from Varanasi, photos included), Christophe Andrieu gave a talk on exact approximations, using unbiased estimators of the likelihood and characterising the estimators that guarantee geometric convergence (essentially bounded weights, a condition that pops up again and again in the Monte Carlo literature). Then Art Owen (father of empirical likelihood, among other things!) spoke about QMC for MCMC, a topic that has always intrigued me.
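For readers unfamiliar with these exact approximations, here is a minimal pseudo-marginal Metropolis-Hastings sketch on a toy latent-variable model (my own illustration, not an example from the talk): the intractable likelihood is replaced by an unbiased Monte Carlo estimate, and the chain still targets the exact posterior.

```python
# A toy pseudo-marginal Metropolis-Hastings sketch (my own illustration,
# not Andrieu's example): the likelihood of a latent-variable model is
# replaced by an unbiased Monte Carlo estimate, yet the chain remains exact.
import numpy as np

rng = np.random.default_rng(0)
y = 1.3   # one observation, with y | theta, z ~ N(theta + z, 1) and z ~ N(0, 1)

def lik_hat(theta, n_mc=20):
    """Unbiased estimate of the intractable likelihood, averaging over z."""
    z = rng.standard_normal(n_mc)
    return np.mean(np.exp(-0.5 * (y - theta - z) ** 2) / np.sqrt(2 * np.pi))

def pm_mh(n_iter=10_000, sigma=1.0):
    theta, lhat = 0.0, lik_hat(0.0)
    chain = np.empty(n_iter)
    for t in range(n_iter):
        prop = theta + sigma * rng.standard_normal()
        lhat_prop = lik_hat(prop)
        # flat prior and symmetric proposal: the acceptance ratio only
        # involves the two likelihood estimates
        if rng.random() < lhat_prop / lhat:
            theta, lhat = prop, lhat_prop   # recycle the estimate with the state!
        chain[t] = theta
    return chain

chain = pm_mh()
```

The crucial point is that the current likelihood estimate is recycled along with the current state rather than refreshed, which is what keeps the chain exact; a bounded estimator corresponds to the bounded-weight condition mentioned above.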
Indeed, while I see the point of using QMC for specific integration problems, I am more uncertain about its relevance for statistics as a simulation device. Having points uniformly distributed over the unit hypercube in a much more efficient way than a random sample does not help much when only a tiny region of the unit hypercube, namely the one where the likelihood concentrates, matters. (In other words, we are rarely interested in the uniform distribution over the unit hypercube: we instead want to simulate from a highly irregular and definitely concentrated distribution.) I have the same reservation about the applicability of stratified sampling: the strata have to be constructed in relation to the target distribution. The method Art advocates uses a CUD (completely uniformly distributed) sequence as the underlying (deterministic) pseudo-uniform sequence. Highly interesting, and I want to read the paper in greater detail, but the fact that most simulation steps use a random number of uniforms seems detrimental to the performance of the method in general.
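To make that fixed-budget requirement concrete, here is a toy sketch in the spirit of the CUD-driven MCMC construction (my own code, not Art's): a random-walk Metropolis chain for a standard normal target is driven either by i.i.d. uniforms or by the full period of a small linear congruential generator, a standard surrogate for a CUD sequence, with each step consuming exactly two uniforms.

```python
# A toy sketch of CUD-driven MCMC (my construction, in the spirit of the
# talk, not actual code from the paper): the same Metropolis kernel is fed
# either i.i.d. uniforms or the full period of a small full-period LCG.
import numpy as np
from scipy.stats import norm

def lcg_uniforms(m=2**16, a=25173, c=13849):
    """One full period of a small LCG, rescaled to (0, 1)."""
    x, out = 1, np.empty(m)
    for i in range(m):
        x = (a * x + c) % m
        out[i] = (x + 0.5) / m
    return out

def metropolis(u, sigma=2.0):
    """Random-walk Metropolis for N(0,1), consuming exactly 2 uniforms per step."""
    x, chain = 0.0, []
    for u1, u2 in zip(u[0::2], u[1::2]):
        prop = x + sigma * norm.ppf(u1)          # proposal via inverse CDF
        if np.log(u2) < 0.5 * (x**2 - prop**2):  # log acceptance ratio
            x = prop
        chain.append(x)
    return np.array(chain)

iid = metropolis(np.random.default_rng(1).random(2**16))
cud = metropolis(lcg_uniforms())
print(iid.mean(), cud.mean())   # both consistent for E[X] = 0
```

Each iteration here consumes exactly two uniforms; a step using a random number of uniforms (a rejection loop inside the proposal, say) would break the alignment with the CUD sequence, which is precisely the reservation above.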
After a lunch break at a terrific BBQ place, with a stop at Lake Alice to watch the alligator(s) I had missed during my morning run, I was this time able to attend Xiao-Li Meng’s talk till the end: he presented new improvements on bridge sampling based on location-scale (or warping) transforms of the original two samples to make them share mean and variance. Hani Doss concluded the meeting with a talk on the computation of Bayes factors under (non-parametric) Dirichlet mixture priors, which does not require new simulations for each value of the scale parameter of the Dirichlet prior, thanks to a Radon-Nikodym derivative representation. (This nicely connected with Art’s talk, where he mentioned that most simulation methods are actually based on Riemann rather than Lebesgue integration. Hani’s representation is not, nested sampling being another exception.)
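As a one-dimensional illustration of the warping idea (a toy of my own making, not Xiao-Li’s actual construction), the sketch below shifts and rescales one sample to share the mean and variance of the other, corrects the corresponding unnormalised density by the Jacobian, and runs the usual iterative bridge sampling estimator on the now-overlapping pair.

```python
# A minimal 1-D sketch of warp bridge sampling (my own toy version of the
# location-scale idea, not code from the talk): warp the q2 sample onto the
# mean and scale of the q1 sample, then bridge-sample as usual.
import numpy as np

rng = np.random.default_rng(2)

# unnormalised densities: q1 = 3 * N(0, 1), q2 = 7 * N(5, 2^2); true ratio 3/7
q1 = lambda x: 3.0 * np.exp(-0.5 * x**2)
q2 = lambda x: 7.0 * np.exp(-0.5 * ((x - 5.0) / 2.0) ** 2)

x1 = rng.standard_normal(5000)            # draws from the normalised q1
x2 = 5.0 + 2.0 * rng.standard_normal(5000)

# warp sample 2 onto the location and scale of sample 1
m1, s1 = x1.mean(), x1.std()
m2, s2 = x2.mean(), x2.std()
unwarp  = lambda y: m2 + (s2 / s1) * (y - m1)
q2_warp = lambda y: q2(unwarp(y)) * (s2 / s1)   # Jacobian keeps c2 intact
x2w = m1 + (s1 / s2) * (x2 - m2)

def bridge(qa, qb, xa, xb, n_iter=50):
    """Iterative (optimal) bridge sampling estimate of ca/cb."""
    la, lb = qa(xa) / qb(xa), qa(xb) / qb(xb)
    na, nb = len(xa), len(xb)
    wa, wb = na / (na + nb), nb / (na + nb)
    r = 1.0
    for _ in range(n_iter):
        r = np.mean(lb / (wa * lb + wb * r)) / np.mean(1.0 / (wa * la + wb * r))
    return r

print(bridge(q1, q2_warp, x1, x2w))   # close to 3/7
```

The warping leaves both normalising constants unchanged, so the estimator still converges to the same ratio, only with much better overlap between the two samples.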
We ended the day with a(nother) barbecue outside, under the stars, in the peace and quiet of a local wood, with wine and laughs, just as George would have concluded the workshop. This was a fitting end to a meeting dedicated to his memory…