Archive for free energy

asymptotically exact inference in likelihood-free models

Posted in Books, pictures, Statistics on November 29, 2016 by xi'an

“We use the intuition that inference corresponds to integrating a density across the manifold corresponding to the set of inputs consistent with the observed outputs.”

Following my earlier post on that paper by Matt Graham and Amos Storkey (University of Edinburgh), I now read through it. The beginning is somewhat unsettling, albeit mildly!, as it starts by mentioning notions like variational auto-encoders, generative adversarial nets, and simulator models, by which they mean generative models represented by a (differentiable) function g that essentially turns basic variates with density p into the variates of interest (with intractable density). A setting similar to Meeds' and Welling's optimisation Monte Carlo. Another proximity pointed out in the paper is Meeds et al.'s Hamiltonian ABC.
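To fix ideas, here is a minimal toy sketch of such a simulator model, with an entirely made-up transformation (nothing taken from the paper): a differentiable map g turning base variates u with tractable density p into outputs whose own density is intractable.

```python
import numpy as np

def generator(theta, u):
    """Toy simulator model: a differentiable map g(theta, u) sending simple
    base variates u (standard normals here) to an output x whose marginal
    density is intractable even though the map itself is explicit."""
    # made-up nonlinear transformation; any differentiable g would do
    return theta[0] + theta[1] * np.tanh(u[0]) + 0.1 * u[1] ** 3

rng = np.random.default_rng(0)
theta = np.array([1.0, 2.0])
u = rng.standard_normal(2)   # base variates with tractable density p(u)
x = generator(theta, u)      # simulated observation, intractable density in x
```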

“…the probability of generating simulated data exactly matching the observed data is zero.”

The section on the standard ABC algorithms mentions the fact that ABC MCMC can be (re-)interpreted as a pseudo-marginal MCMC, albeit one targeting the ABC posterior instead of the original posterior. The starting point of the paper is the above quote, which echoes a conversation I had with Gabriel Stoltz a few weeks ago, when he presented his free energy method to me and when I could not see how to connect it with ABC, because having an exact match seemed to cancel the appeal of ABC, all parameter simulations then producing an exact match under the right constraint. However, the paper maintains this can be done, by looking at the joint distribution of the parameters, latent variables, and observables. Under the implicit restriction imposed by keeping the observables constant. Which defines a manifold. The mathematical validation is achieved by deriving the density over this manifold, which looks like

p(u)\left|\frac{\partial g^0}{\partial u}\frac{\partial g^0}{\partial u}^\text{T}\right|^{-1/2}

if the constraint can be rewritten as g⁰(u)=0. (This actually follows from a 2013 paper by Diaconis, Holmes, and Shahshahani.) In the paper, the simulation is conducted by Hamiltonian Monte Carlo (HMC), the leapfrog steps consisting of an unconstrained move followed by a projection onto the manifold. This however sounds somewhat intense in that it involves a quasi-Newton resolution at each step. I also find it surprising that this projection step does not jeopardise the stationary distribution of the process, as the argument found therein about the approximation of the approximation is not particularly deep. But the main thing that remains unclear to me after reading the paper is how the constraint that the pseudo-data be equal to the observed data can be turned into a closed-form condition like g⁰(u)=0. As mentioned above, the authors assume a generative model based on uniform (or other simple) random inputs, but this representation seems impossible to achieve in reasonably complex settings.
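For what it is worth, here is a rough sketch of how I picture the scheme, most definitely not the authors' implementation: the density over the manifold {u: g⁰(u)=0} as displayed above, and a leapfrog step made of an unconstrained move followed by a Newton-type projection back onto the manifold (standing in for the quasi-Newton resolution mentioned above), with the momentum projected onto the tangent space.

```python
import numpy as np

def log_manifold_density(u, log_p, jac):
    """log of p(u) |dg0/du dg0/du^T|^{-1/2}, the density over the manifold."""
    J = jac(u)                                     # Jacobian of g0 at u
    _, logdet = np.linalg.slogdet(J @ J.T)
    return log_p(u) - 0.5 * logdet

def project(u, g0, jac, tol=1e-10, max_iter=50):
    """Newton-style projection of u back onto the manifold {g0(u)=0}."""
    for _ in range(max_iter):
        c = g0(u)                                  # constraint violation
        if np.max(np.abs(c)) < tol:
            break
        J = jac(u)
        u = u - J.T @ np.linalg.solve(J @ J.T, c)
    return u

def constrained_leapfrog(u, v, eps, grad_log_target, g0, jac):
    """One unconstrained leapfrog move followed by a projection onto the
    manifold, plus a projection of the momentum onto its tangent space."""
    v = v + 0.5 * eps * grad_log_target(u)
    u = project(u + eps * v, g0, jac)
    v = v + 0.5 * eps * grad_log_target(u)
    J = jac(u)
    v = v - J.T @ np.linalg.solve(J @ J.T, J @ v)  # tangent-space momentum
    return u, v
```

Obviously the actual algorithm has to handle reversibility and the accept-reject step much more carefully than this caricature does.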

simulation under zero measure constraints [a reply]

Posted in Books, pictures, Statistics, University life on November 21, 2016 by xi'an

Following my post of last Friday on simulating over zero measure sets, as, e.g., producing a sample with a given maximum likelihood estimator, Dennis Prangle pointed out the recent paper on the topic by Graham and Storkey, and a wee bit later, Matt Graham himself wrote an answer to my X Validated question detailing the resolution of the MLE problem for a Student's t sample. Including the undoubtedly awesome picture of a 3-observation sample distribution over a non-linear manifold in R³. When reading this description I was then reminded of a discussion I had a few months ago with Gabriel Stoltz about his free energy approach that managed the same goal through a Langevin process. Including the book Free Energy Computations he wrote in 2010 with Tony Lelièvre and Mathias Rousset. I now have to dig deeper in these papers, but in the meanwhile let me point out that there is a bounty of 200 points running on this X Validated question for another three days. Offered by Glen B., the all-time #1 contributor on X Validated.
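For the record, and as I understand my own question, the zero-measure set in the Student's t example is nothing but the solution set of the score equation at the fixed value of the estimate, namely

\left\{x\in\mathbb{R}^3\,:\ \nabla_\theta \log L(\theta;x)\big|_{\theta=\hat\theta}=0\right\}

which, being the zero set of dim(θ) smooth equations in three variables, is generically a manifold of dimension 3−dim(θ) (once the second-order condition making θ̂ an actual maximum is added).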

An objective prior that unifies objective Bayes and information-based inference

Posted in Books, pictures, Statistics, Travel, University life on June 8, 2015 by xi'an

During the Valencia O'Bayes 2015 meeting, Colin LaMont and Paul Wiggins arXived a paper entitled "An objective prior that unifies objective Bayes and information-based inference". It would have been interesting to have the authors in Valencia, as they make bold claims about their w-prior as being uniformly and maximally uninformative. Plus achieving the unification advertised in the title of the paper. Meaning that the free energy (log transform of the inverse evidence) is the Akaike information criterion.
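To unpack the claim, and at my own risk of misrepresenting the conventions of the paper (factors of 2 included), with m(x)=∫p(x|θ)π(θ)dθ denoting the evidence,

F(x) = -\log m(x) \qquad\text{and}\qquad \text{AIC} = -2\log p(x|\hat\theta) + 2K

the w-prior is purportedly built so that the former agrees with the latter, K being the dimension of the parameter.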

The paper starts by defining a true prior distribution (presumably in analogy with the true value of the parameter?) and generalised posterior distributions as associated with any arbitrary prior. (Some notations are imprecise, check (3) with the wrong denominator or the predictivity that is supposed to cover N new observations on p.2…) It then introduces a discretisation by considering all models within a certain Kullback divergence δ to be indistinguishable. (A definition that does not account for the asymmetry of the Kullback divergence.) From there, it most surprisingly [given the above discretisation] derives a density on the whole parameter space

\pi(\theta) \propto \left(\det I(\theta)\right)^{1/2} (N/2\pi\delta)^{K/2}

where N is the number of observations and K the dimension of θ. Dimension which may vary. The dependence on N of the above is a result of using the predictive on N points instead of one. The w-prior is however defined differently: "as the density of indistinguishable models such that the multiplicity is unity for all true models". Where the log transform of the multiplicity is the expected log marginal likelihood minus the expected log predictive [all expectations under the sampling distributions, conditional on θ], as I try to spell out below. Rather puzzling in that it involves the "true" value of the parameter—another notational imprecision, since it has to hold for all θ's—as well as possibly improper priors. When the prior is improper, the log-multiplicity is a difference of two terms such that the first term depends on the constant used with the improper prior, while the second one does not… Unless the multiplicity constraint also determines the normalising constant?! But this does not seem to be the case when considering the following section on normalising the w-prior. Mentioning a "cutoff" for the integration that seems to pop out of nowhere. Curiouser and curiouser. Due to this unclear handling of infinite mass priors, and since neither the claimed properties of uniform and maximal uninformativeness nor the existence of a non-asymptotic solution to the multiplicity equation are established in any formal way, I quickly lost interest in the paper. Which does not contain any worked-out example. Read at your own risk!
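In symbols, my reading of this multiplicity definition is something like

\log M(\theta) = \mathbb{E}_{X\sim p(\cdot|\theta)}\left[\log m(X)\right] - \mathbb{E}_{X,\tilde X\sim p(\cdot|\theta)}\left[\log p(\tilde X\mid X)\right]

with m the marginal likelihood, p(·|X) the posterior predictive of N further observations, and both expectations taken under the sampling distribution indexed by θ, although the notational imprecisions mentioned above mean I may well be off.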

computational methods for statistical mechanics [day #2]

Posted in Mountains, pictures, Running, Statistics, Travel, University life on June 5, 2014 by xi'an

Arthur's Seat, Edinburgh, Sep. 7, 2011

The last "tutorial" talk at this ICMS workshop ["at the interface between mathematical statistics and molecular simulation"] was given by Tony Lelièvre on adaptive bias schemes in Langevin algorithms and on the parallel replica algorithm. This was both very interesting because of the potential for connections with my "brand" of MCMC techniques and rather frustrating as I felt the intuition behind the physical concepts like free energy and metastability was almost within my reach! The most accessible part of Tony's talk was the illustration of the concepts through a mixture posterior example. Example that I need to (re)read further to grasp the general idea. (And maybe the book on Free Energy Computations Tony wrote with Mathias Rousset and Gabriel Stoltz.) A definitely worthwhile talk that I hope will get posted online by ICMS. The other talks of the day were mostly of a free energy nature, some using optimised bias in the Langevin diffusion (except for Pierre Jacob, who presented his impossibility result on non-negative unbiased estimation).
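Since I keep referring to these adaptive bias schemes without defining them, here is a toy caricature, most definitely not Tony's scheme: an unadjusted Langevin walk on a double-well potential, where a tabulated bias deposited wherever the walker sits progressively flattens the metastable wells, in a metadynamics/Wang-Landau spirit, the negative of the accumulated bias giving a crude free-energy profile along x.

```python
import numpy as np

def grad_U(x):
    """Gradient of the double-well potential U(x) = (x^2 - 1)^2."""
    return 4 * x * (x ** 2 - 1)

def adaptive_bias_langevin(n_iter=100_000, step=1e-3, beta=5.0, stride=0.5):
    """Toy adaptive-bias unadjusted Langevin scheme: a bias potential built
    from the visited states is added to U so that the walker eventually
    escapes the metastable wells."""
    rng = np.random.default_rng(1)
    grid = np.linspace(-2.0, 2.0, 81)
    width = grid[1] - grid[0]
    bias = np.zeros_like(grid)          # tabulated bias potential on the grid
    x, xs = -1.0, []
    for _ in range(n_iter):
        i = int(np.clip(np.searchsorted(grid, x), 1, len(grid) - 2))
        grad_bias = (bias[i + 1] - bias[i - 1]) / (2 * width)
        # unadjusted Langevin move on the biased potential U + bias
        x += -step * (grad_U(x) + grad_bias) \
             + np.sqrt(2 * step / beta) * rng.standard_normal()
        bias[i] += stride * step        # deposit a small increment of bias
        xs.append(x)
    return np.array(xs), grid, bias

samples, grid, bias = adaptive_bias_langevin()
# -bias is, up to an additive constant, a crude free-energy profile along x
```

The schemes discussed in the talk bias along a reaction coordinate and come with actual convergence results, which this toy version obviously lacks.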

València 9 snapshot [4]

Posted in Statistics, University life on June 8, 2010 by xi'an

This one-before-last day at València 9 was fairly busy and I skipped the [tantalising] trip back to Sella to attend morning and afternoon talks. The first session involved Nicolas Chopin and Pierre Jacob's free-energy paper, whose earlier version I had heard at CREST, which extends the earlier paper of Nicolas with Tony Lelièvre and Gabriel Stoltz to construct a sequential Monte Carlo sampler that is biased along a preferential direction in order to fight multimodality and label switching in the case of mixtures. Peter Green rightly pointed out the difficulty in constructing this direction, which looks like a principal component to me, but this may open a new line of research on a potentially adaptive direction updated with the SMC sampler… Although I always have trouble understanding the gist of causal models, Thomas Richardson's talk about transparent parameterisation was quite interesting in its links both with contingency tables and with identifiability issues (should Bayesians care about identifiability?! I did not really understand why the data could help in specifying the unidentified parameter in an empirical Bayes manner, though).

The morning talk by Darren Wilkinson was particularly enticing in that Darren presented, in a very articulate manner, the specifics of analysing stochastic kinetic models for bacterial regulation, and that he also introduced a likelihood-free MCMC that was not ABC-MCMC. (At first sight, it sounds like the auxiliary variable technique of Møller, Pettitt, Reeves and Berthelsen, but I want to read the paper to understand the differences better.) Despite the appalling audio and video rendering in the conference room, the filmed discussion by Samuel Kou got into a comparison with ABC. The afternoon non-parametric session left me a bit confused as to the infinite regress of Dirichlet process expansions, but I enjoyed the next talk by Geoff Nicholls on partial ordering inference immensely, even though I missed the bishop example at the beginning because the talks drifted off schedule due to the absence of the first speaker of the session. During the poster session (where again I only saw a fourth of the material!), I had the pleasant surprise to meet a student from the University of Canterbury, Christchurch, who took my Bayesian Core class when I visited in 2006.
