## Boltzmann optimisation as simulating device

Posted in Books, Statistics, University life on June 18, 2020 by xi'an

“The problem of drawing samples from a discrete distribution can be converted into a discrete optimization problem” Maddison et al., 2014

I recently learned about the Gumbel-Max “trick” proposed by Chris Maddison, Daniel Tarlow, and Tom Minka in a 2014 NIPS talk. Namely that generating from a Boltzmann distribution

$p_j=\frac{\exp\{g_j\}}{\sum_i \exp\{g_i\}}$

is equivalent to adding standard Gumbel noise to the energies g<sub>j</sub> and taking the index of the maximum. A rare (?) instance of using optimisation to simulate, compared with the reverse practice of using simulation to reach maxima. Of course, this requires as many Gumbel simulations as there are terms in the sum. Or a clever way to avoid this exhaustive listing.
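A minimal sketch of the trick (the function and variable names are mine, not from the talk): perturb each energy with independent standard Gumbel noise and return the argmax, which is then distributed according to the Boltzmann probabilities above.

```python
import numpy as np

def gumbel_max_sample(energies, rng):
    """Draw one index j with probability exp(g_j) / sum_i exp(g_i)."""
    # Add i.i.d. standard Gumbel noise to each energy; the argmax of the
    # perturbed energies follows the Boltzmann (softmax) distribution.
    g = rng.gumbel(size=len(energies))
    return int(np.argmax(energies + g))

rng = np.random.default_rng(0)
energies = np.array([1.0, 2.0, 3.0])
print(gumbel_max_sample(energies, rng))
```

Note that the normalising constant never needs to be computed, which is the appeal of the trick; the cost is one Gumbel draw per term, hence the exhaustive-listing caveat above.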

“According to Gumbel’s statistics, 326 out of 354 political murders by right-wing factions in the early Weimar Republic went unpunished, and four out of the 22 left-wing capital crimes.” Science News

As a historical aside, I discovered Gumbel's anti-Nazi activism while he was in Germany in the 1920s and early 1930s (until his expulsion from the University of Heidelberg). Including the 1932 call against the Nazis (which Albert Einstein and Heinrich Mann also signed), hence the above poster.

## efficient acquisition rules for ABC

Posted in pictures, Statistics, University life on June 5, 2017 by xi'an

A few weeks ago, Marko Järvenpää, Michael Gutmann, Aki Vehtari and Pekka Marttinen arXived a paper on sampling design for ABC that reminded me of presentations Michael gave at NIPS 2014 and in Banff last February. The main notion is that, when the simulation from the model is hugely expensive, random sampling does not make sense.

“While probabilistic modelling has been used to accelerate ABC inference, and strategies have been proposed for selecting which parameter to simulate next, little work has focused on trying to quantify the amount of uncertainty in the estimator of the ABC posterior density itself.”

The above question is obviously interesting, if already considered in the literature, as it seems to focus on the Monte Carlo error in ABC, addressed for instance in Fearnhead and Prangle (2012), Li and Fearnhead (2016), and our paper with David Frazier, Gael Martin, and Judith Rousseau. With corresponding conditions on the tolerance and the number of simulations that relegate the Monte Carlo error to a secondary level. And the additional remark that the (error-free) ABC distribution itself is not the ultimate quantity of interest. Or the equivalent (?) one that ABC is actually an exact Bayesian method on a completed space.

The paper initially confused me with a section on the very general formulation of the ABC posterior approximation and of the error in this approximation, and on simulation designs minimising this error. It confused me as it sounded too vague, but only for a while, since the remaining sections turn out to be independent. The operational concept of the paper is to model the discrepancy between observed and simulated data, perceived as a random function of the parameter θ, as a Gaussian process over the parameter space. This modelling allows for a prediction of the discrepancy at a new value of θ, which can be chosen to maximise the variance of the likelihood approximation, or more precisely of the acceptance probability. While the authors report improved estimation of the exact posterior, I find no intuition as to why this should be the case when focussing on the discrepancy, especially because small discrepancies are associated with parameters approximately generated from the posterior.
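A toy one-dimensional sketch of the idea, under my own illustrative assumptions (an RBF kernel with fixed lengthscale, and an acquisition equal to the predictive variance p(1−p) of the acceptance indicator 1{Δ(θ)&lt;ε}), not the authors' actual acquisition rule: fit a GP to the observed discrepancies and pick the next design point where the acceptance probability is most uncertain.

```python
import math
import numpy as np

def rbf(a, b, ell=1.0):
    # squared-exponential kernel between two 1-D point sets
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def gp_posterior(X, y, Xs, ell=1.0, noise=1e-6):
    """GP predictive mean and variance at test points Xs, given data (X, y)."""
    K = rbf(X, X, ell) + noise * np.eye(len(X))
    Ks = rbf(X, Xs, ell)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(Xs, Xs, ell)) - np.sum(v ** 2, axis=0)
    return mu, np.maximum(var, 1e-12)

def next_design_point(X, y, grid, eps, ell=1.0):
    # Acquisition: predictive variance of the acceptance indicator,
    # p(1-p) with p = Phi((eps - mu) / sd) under the GP predictive.
    mu, var = gp_posterior(X, y, grid, ell)
    sd = np.sqrt(var)
    p = np.array([0.5 * (1 + math.erf((eps - m) / (s * math.sqrt(2.0))))
                  for m, s in zip(mu, sd)])
    return grid[int(np.argmax(p * (1 - p)))]

# three expensive simulator runs, then choose where to simulate next
X = np.array([-1.0, 0.0, 1.0])
y = X ** 2  # observed discrepancies (toy stand-in for the simulator)
grid = np.linspace(-2.0, 2.0, 81)
print(next_design_point(X, y, grid, eps=0.5))
```

The point of such a rule is that each simulator call is spent where it most reduces uncertainty about acceptance, instead of being drawn at random.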

## ABC à… Montréal

Posted in pictures, Statistics, Travel, University life on August 24, 2015 by xi'an

Like last year, NIPS will be hosted in Montréal, Québec, Canada, and like last year there will be an ABC NIPS workshop, with a wide variety of speakers and poster presenters. There will also be a probabilistic integration NIPS workshop, to which I have been invited to give a talk, following my blog on the topic! Workshops are on December 11 and 12, and I hope those two won't overlap so that I can enjoy both at length (before flying back to London for CFE 2015…)

Update: they do overlap, both being on December 11…

## NIPS 2014

Posted in Kids, pictures, Statistics, Travel, University life on December 15, 2014 by xi'an

Second and last day of the NIPS workshops! The collection of topics was quite broad and would have made my choosing an ordeal, except that I was invited to give a talk at the probabilistic programming workshop, solving my dilemma… The first talk, by Kathleen Fisher, was quite enjoyable in that it gave a conceptual discussion of the motivations for probabilistic languages, drawing an analogy with the early days of computer programming, which saw a separation between higher-level computer languages and machine programming, with a compiler interface. And calling for a similar separation between the models faced by statistical inference and machine learning and the corresponding code, if I understood her correctly. This was connected with Frank Wood's talk of the previous day, where he illustrated the concept through the production of computer code to approximately generate from standard distributions like the Normal or the Poisson. Approximately as in ABC, which is why the organisers invited me to talk in this session. However, I was a wee bit lost in the following talks, and presumably lost part of my audience during my talk, as I realised later to my dismay when someone told me he had not perceived the distinction between the trees in the random forest procedure and the phylogenetic trees in the population genetics application. Still, while it had for me a sort of Twilight Zone feeling of having stepped into another dimension, attending this workshop was a worthwhile experiment as an eye-opener into a highly different albeit connected field, where code and simulator may take the place of a likelihood function… To the point of defining Hamiltonian Monte Carlo directly on the former, as Vikash Mansinghka showed me at the break.

I completed the day with the final talks in the variational inference workshop, if only to get back on firmer ground! Apart from attending my third talk by Vikash in the conference (but on a completely different topic on variational approximations for discrete particle-ar distributions), a talk by Tim Salimans linked MCMC and variational approximations, using MCMC and HMC to derive variational bounds. (He did not expand on the opposite use of variational approximations to build better proposals.) Overall, I found these two days and my first NIPS conference quite exciting, if somewhat overpowering, with a different atmosphere and a different pace compared with (small or large) statistical meetings. (And a staggering gender imbalance!)

## further up North

Posted in pictures, Travel, University life on December 14, 2014 by xi'an