the Poisson transform

In obvious connection with an earlier post on the “estimation” of normalising constants, Simon Barthelmé and Nicolas Chopin have just arXived a paper on The Poisson transform for unnormalised statistical models. An obvious connection, because I first heard of the Gutmann and Hyvärinen (2012) paper when Simon came to CREST to give a BiP talk on the present paper a few weeks ago. (A related talk he gave in Banff is available as a BIRS video.)

Without getting too much into details, the neat idea therein is to turn the observed log-likelihood

\sum_{i=1}^n f(x_i|\theta) - n \log \int \exp\{f(x|\theta)\}\,\text{d}x

into a joint log-likelihood (in θ and ν)

\sum_{i=1}^n[f(x_i|\theta)+\nu]-n\int\exp[f(x|\theta)+\nu]\text{d}x

which is, up to a constant, the log-likelihood of a Poisson point process with intensity function

\exp\{ f(x|\theta) + \nu +\log n\}

This is an alternative model in that the original likelihood does not appear as a marginal of the above. Only the modes coincide, with the conditional mode in ν providing the normalising constant. In practice, the above Poisson process likelihood is unavailable and Gutmann and Hyvärinen (2012) offer an approximation by means of their logistic regression.
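To make the conditional-mode property concrete, here is a minimal numerical sketch (my own toy illustration, not an example from the paper), using an unnormalised Gaussian f(x|θ) = −θx², whose true normalising constant is Z(θ) = √(π/θ): maximising the joint objective over ν for a fixed θ should return ν̂ = −log Z(θ).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# Toy unnormalised model (illustration only): f(x|theta) = -theta * x^2,
# with known normalising constant Z(theta) = sqrt(pi / theta).
def f(x, theta):
    return -theta * x**2

def joint_loglik(theta, nu, xs):
    # Poisson-transform objective:
    #   sum_i [f(x_i|theta) + nu] - n * int exp[f(x|theta) + nu] dx
    n = len(xs)
    integral = quad(lambda x: np.exp(f(x, theta) + nu), -np.inf, np.inf)[0]
    return np.sum(f(xs, theta)) + n * nu - n * integral

theta = 0.5
rng = np.random.default_rng(0)
xs = rng.normal(scale=np.sqrt(1.0 / (2.0 * theta)), size=100)

# For fixed theta, the conditional mode in nu recovers -log Z(theta)
res = minimize_scalar(lambda nu: -joint_loglik(theta, nu, xs),
                      bounds=(-5.0, 5.0), method="bounded")
nu_hat = res.x
print(nu_hat, -np.log(np.sqrt(np.pi / theta)))  # the two values should agree
```

Note that the data xs play no role in the ν-maximisation beyond their count n, which is exactly why the conditional mode delivers the normalising constant regardless of θ.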

Unavailable likelihoods inevitably make me think of ABC. Would ABC solutions be of interest there? In particular, could the Poisson point process be simulated with no further approximation? Since the “true” likelihood is not preserved by this representation, similar questions to those found in ABC arise, like a measure of departure from the “true” posterior. Looking forward to the Bayesian version! (Marginalia: Siméon Poisson died in Sceaux, which seems to have attracted many mathematicians at the time, since Cauchy also spent part of his life there…)
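On the simulation question: in low dimension at least, a Poisson point process with intensity n exp{f(x|θ)+ν} can be simulated exactly on a compact window by Lewis–Shedler thinning, provided a bound on the intensity is available. A sketch with a made-up quadratic f (an illustration of the generic recipe, not of the paper's examples):

```python
import numpy as np

rng = np.random.default_rng(1)

# Lewis-Shedler thinning on a window [a, b]: simulate a dominating homogeneous
# Poisson process with rate lam_max >= lam(x) on [a, b], then keep each
# proposed point x with probability lam(x) / lam_max.
def simulate_ppp(lam, lam_max, a, b, rng):
    n_prop = rng.poisson(lam_max * (b - a))
    props = rng.uniform(a, b, size=n_prop)
    keep = rng.uniform(size=n_prop) < lam(props) / lam_max
    return props[keep]

# Intensity n * exp{f(x|theta) + nu} with a toy f(x|theta) = -theta * x^2
theta, nu, n = 0.5, -0.9, 100
lam = lambda x: n * np.exp(-theta * x**2 + nu)
points = simulate_ppp(lam, lam(0.0), -6.0, 6.0, rng)  # lam peaks at x = 0
print(len(points))  # Poisson count, mean about n * exp(nu) * sqrt(pi/theta)
```

Of course the cost of thinning grows with the dominating rate, and finding a usable bound in higher dimension is precisely where this becomes delicate.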

One Response to “the Poisson transform”

  1. Dan Simpson Says:

    It’s pretty easy to simulate from a Poisson process, so that bit’s fine. But it’s probably going to be too high-dimensional for ABC (I’m not 100% on the dimension of \nu here).

    But, I’m not sure why you’d bother. The “approximate likelihood” is pretty easy to compute (I wouldn’t use logistic regression – it’s a low dimensional integral: use Gauss quadrature) and the resulting posterior satisfies pretty straightforward perturbation bounds w.r.t. this integration error. (This is even true for things like log-Gaussian Cox processes)
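[The quadrature route mentioned above can be sketched in one dimension with a made-up quartic f, not taken from the paper: writing ∫exp{f(x)}dx = ∫e^{−x²}[exp{f(x)+x²}]dx turns the integral into a Gauss–Hermite sum.]

```python
import numpy as np
from scipy.integrate import quad

# Made-up unnormalised model for illustration: f(x) = -x^2 - 0.1 * x^4
def f(x):
    return -x**2 - 0.1 * x**4

# Gauss-Hermite: int exp{f(x)} dx = int e^{-x^2} * exp{f(x) + x^2} dx
#                                 ~ sum_i w_i * exp{f(x_i) + x_i^2}
nodes, weights = np.polynomial.hermite.hermgauss(60)
Z_gh = np.sum(weights * np.exp(f(nodes) + nodes**2))

# Reference value from adaptive quadrature
Z_ref = quad(lambda x: np.exp(f(x)), -np.inf, np.inf)[0]
print(Z_gh, Z_ref)  # the two estimates should be close
```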
