## training energy based models

**T**his recent arXival by Song and Kingma covers different computational approaches to semi-parametric estimation, but also exposes, imho, the chasm between statistical and machine-learning perspectives on the problem.

“Energy-based models are much less restrictive in functional form: instead of specifying a normalized probability, they only specify the unnormalized negative log-probability (…) Since the energy function does not need to integrate to one, it can be parameterized with any nonlinear regression function.”

This passage from the introduction at first reads as a strange argument, since the mass-one constraint is the least of the problems when addressing non-parametric density estimation, compared with issues like convergence, the speed of convergence, the computational cost, and the overall integrability of the estimator. It seems, however, that the restriction, or lack thereof, is to be understood as the ability to use much more elaborate forms of densities, which are then black boxes whose components have little relevance… When using such mega-over-parameterised representations of densities, as with neural networks and normalising flows, a statistical assessment leads to highly challenging questions. But convergence (in the sample size) does not appear to be a concern for the paper. (Except for a citation of Hyvärinen on p.5.)

Using MLE in this context appears questionable, though, since the base parameter θ is not unlikely to remain unidentifiable. Computing the MLE is therefore a minor issue in this regard, a resolution based on simulated gradients being well-charted since the earlier era of stochastic optimisation, as in Robbins & Monro (1951), Duflo (1996), or Benveniste & al. (1990). (The log-gradient of the normalising constant is estimated by the opposite of the gradient of the energy at a point simulated from the model.)
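As a toy illustration of this stochastic-gradient MLE (my own sketch, not from the paper), take the unnormalised energy E_θ(x) = θx²/2, a zero-mean Gaussian with precision θ, and estimate the gradient of the log-normalising constant with draws from the model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy energy family: E_theta(x) = theta * x**2 / 2, i.e. an unnormalised
# zero-mean Gaussian with precision theta (Z(theta) is never computed).
def dE_dtheta(x):
    return x ** 2 / 2.0

true_theta = 4.0
data = rng.normal(0.0, true_theta ** -0.5, size=5000)

theta = 1.0
for t in range(2000):
    lr = 5.0 / (1.0 + t) ** 0.7                   # Robbins-Monro step sizes
    xb = rng.choice(data, size=64)                # data minibatch
    xm = rng.normal(0.0, theta ** -0.5, size=64)  # model draws (stand-in for MCMC)
    # grad log-lik = -grad E(data) + E_model[grad E]; the second term is the
    # opposite of grad log Z(theta), estimated at the simulated points
    grad = (-dE_dtheta(xb) + dE_dtheta(xm)).mean()
    theta = max(theta + lr * grad, 1e-3)

print(theta)   # drifts towards the true precision 4.0
```

The exact model draw is of course a luxury of this toy example; in general it is precisely the expensive MCMC step discussed next.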

“Running MCMC till convergence to obtain a sample x∼p(x) can be computationally expensive.”

Contrastive divergence *à la* Hinton (2002) is presented as a solution to the convergence problem, by stopping the MCMC iterations early, which seems reasonable given that the random gradient is mostly noise. With a possible correction for the resulting bias *à la* Jacob & al. (missing the published version).
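Sticking with the same toy Gaussian-precision energy (my illustration, not the paper's), contrastive divergence replaces the exact model draw with a few Langevin steps started at the data, i.e. an early-stopped MCMC:

```python
import numpy as np

rng = np.random.default_rng(1)

true_theta = 4.0
data = rng.normal(0.0, true_theta ** -0.5, size=5000)

def langevin_step(x, theta, eps):
    # x <- x - eps * dE/dx + sqrt(2*eps) * noise, with dE/dx = theta * x
    return x - eps * theta * x + np.sqrt(2 * eps) * rng.normal(size=x.shape)

theta = 1.0
for t in range(2000):
    lr = 2.0 / (1.0 + t) ** 0.7
    xb = rng.choice(data, size=64)
    x_neg = xb.copy()        # contrastive divergence: start the chain at the data...
    for _ in range(10):      # ...and stop it well before convergence
        x_neg = langevin_step(x_neg, theta, eps=0.05)
    # same stochastic gradient as for MLE, with the truncated chain as negative phase
    grad = (-(xb ** 2) / 2 + (x_neg ** 2) / 2).mean()
    theta = max(theta + lr * grad, 1e-3)

print(theta)
```

The early stopping (and the Langevin discretisation) induces a bias in the fitted θ, which is where a debiasing correction such as Jacob & al.'s could enter.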

An alternative to MLE is the 2005 Hyvärinen score, notorious for bypassing the normalising constant. But it is blamed in the paper for being costly in the dimension d of the variate x, due to the second-derivative matrix. This cost can be avoided by using Stein’s unbiased estimator of the risk (yay!) when using randomised data. And the score is surprisingly linked with contrastive divergence as well, if a Taylor expansion is a good enough approximation! An interesting byproduct of the discussion on score matching is to turn it into an unintended form of ABC!
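On the same toy precision family (my sketch), the Hyvärinen objective E[½(∂ₓ log p)² + ∂ₓ² log p] drops the normalising constant entirely; in one dimension the second-derivative term is a scalar, and the empirical minimiser is even available in closed form:

```python
import numpy as np

rng = np.random.default_rng(2)
true_theta = 4.0
x = rng.normal(0.0, true_theta ** -0.5, size=100_000)

# For E_theta(x) = theta*x**2/2 the score is d/dx log p = -theta*x, so the
# Hyvarinen objective is J(theta) = E[0.5*theta**2*x**2 - theta]: no Z(theta),
# and in dimension d the last term would be the (costly) Laplacian.
def J(theta):
    return np.mean(0.5 * theta ** 2 * x ** 2 - theta)

# dJ/dtheta = theta*E[x**2] - 1 = 0 gives the empirical minimiser:
theta_hat = 1.0 / np.mean(x ** 2)
print(theta_hat)   # close to the true precision 4.0
```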

“Many methods have been proposed to automatically tune the noise distribution, such as Adversarial Contrastive Estimation (Bose et al., 2018), Conditional NCE (Ceylan and Gutmann, 2018) and Flow Contrastive Estimation (Gao et al., 2020).”

A third approach is the noise contrastive estimation method of Gutmann & Hyvärinen (2010), which connects with both of the above. And which is a precursor of GAN methods, mentioned at the end of the paper via a (sort of) variational inequality.
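Again on the toy model (my sketch), NCE fits θ, together with the log-normalising constant treated as a free parameter c, by logistic discrimination between the data and draws from a known noise density q:

```python
import numpy as np

rng = np.random.default_rng(3)

true_theta = 4.0
data = rng.normal(0.0, true_theta ** -0.5, size=5000)
noise = rng.normal(0.0, 1.0, size=5000)     # noise distribution q = N(0,1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_q(x):
    return -x ** 2 / 2 - 0.5 * np.log(2 * np.pi)

# Unnormalised model: log p~(x) = -theta*x**2/2 + c, where c stands in for
# -log Z(theta) and is *learned* as a free parameter (the NCE trick).
theta, c = 1.0, 0.0
for _ in range(5000):
    G_d = -theta * data ** 2 / 2 + c - log_q(data)    # classification logits, data
    G_n = -theta * noise ** 2 / 2 + c - log_q(noise)  # classification logits, noise
    r_d = 1.0 - sigmoid(G_d)      # logistic residuals on data (label 1)
    r_n = -sigmoid(G_n)           # logistic residuals on noise (label 0)
    g_theta = np.mean(r_d * (-data ** 2 / 2)) + np.mean(r_n * (-noise ** 2 / 2))
    g_c = np.mean(r_d) + np.mean(r_n)
    theta += 0.2 * g_theta        # gradient ascent on the classification log-lik
    c += 0.2 * g_c

print(theta, c)   # c should approach -log Z(4) = 0.5*log(2/pi), about -0.23
```

The logistic discriminator here is what makes NCE read as a precursor of GANs, with the noise generator held fixed rather than trained.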

April 10, 2021 at 7:32 pm

Any thoughts about using general optimization techniques on the weighted likelihood function, that is, with the prior, in order to identify heavy parts of the posterior? I’m thinking in general of the techniques described in the J. C. Spall text, *Introduction to Stochastic Search and Optimization* (2003), and in particular of SPSA (pp. 176-207). Intriguing to me there was the technical argument that U-shaped (and similar) probability densities for search were essential for assuring gradient estimates are bounded. This led to wondering what that kind of condition might imply about instrumental distributions for MCMC.

April 9, 2021 at 11:02 am

In a recent preprint (https://arxiv.org/abs/2012.10903) we explored using Hyvärinen’s score matching for fitting energy-based models with an exponential family structure, in a likelihood-free inference setting. That also resulted in useful summary statistics for a subsequent ABC step.

Also, score matching can be sped up using a sliced version (https://arxiv.org/abs/1905.07088)
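To illustrate the slicing trick mentioned in this comment (my own toy sketch, unrelated to the linked papers’ experiments): random projections v replace the full second-derivative matrix with cheap directional derivatives, here for an isotropic energy E_θ(x) = θ‖x‖²/2 in dimension d:

```python
import numpy as np

rng = np.random.default_rng(5)
d, true_theta = 5, 4.0
x = rng.normal(0.0, true_theta ** -0.5, size=(20000, d))

# Model score for E_theta(x) = theta*||x||^2/2 is s(x) = -theta*x.
# Sliced objective with Rademacher projections v:
#   J(theta) = E[ v^T d/dx (v^T s(x)) + 0.5*(v^T s(x))^2 ]
#            = E[ -theta*||v||^2 + 0.5*theta**2*(v^T x)**2 ]
v = rng.choice([-1.0, 1.0], size=x.shape)   # one random slice per sample
proj = np.sum(v * x, axis=1)                # directional derivative data, v^T x
# Closed-form minimiser of the quadratic-in-theta sliced objective:
theta_hat = d / np.mean(proj ** 2)
print(theta_hat)   # close to the true precision 4.0
```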

April 7, 2021 at 9:34 pm

One topic in this area about which I am always curious: for parametric models, using a procedure other than maximum likelihood typically costs you in terms of the estimator’s asymptotic variance, and in low-dimensional settings this might just mean picking up a bearable constant factor. For models which are high-dimensional or (essentially) non-parametric, the gap could somehow be even worse. I don’t know of many concrete results to this effect, though.
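A classical low-dimensional instance of that constant factor (my illustration, not tied to EBMs): estimating a Gaussian mean by the sample median instead of the MLE, the sample mean, inflates the asymptotic variance by π/2 ≈ 1.57:

```python
import numpy as np

rng = np.random.default_rng(6)

# For a N(mu, 1) sample, the MLE of mu is the sample mean (variance 1/n),
# while the sample median has asymptotic variance (pi/2)/n: a constant-factor
# efficiency price for using a non-MLE estimator.
n, reps = 400, 5000
samples = rng.normal(0.0, 1.0, size=(reps, n))
var_mean = np.var(samples.mean(axis=1))
var_median = np.var(np.median(samples, axis=1))
ratio = var_median / var_mean
print(ratio)   # close to pi/2, about 1.57
```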