Arrogance sampling

A new posting on arXiv by Benedict Escoto presents a simulation method for approximating normalising constants (i.e., the evidence) under an eye-catching name! Here is the abstract:

This paper describes a method for estimating the marginal likelihood or Bayes factors of Bayesian models using non-parametric importance sampling (“arrogance sampling”). This method can also be used to compute the normalizing constant of probability distributions. Because the required inputs are samples from the distribution to be normalized and the scaled density at those samples, this method may be a convenient replacement for the harmonic mean estimator. The method has been implemented in the open source R package margLikArrogance.

The crux of the arrogance sampling method lies in using a non-parametric estimate of the target density, based on a preliminary simulation from the posterior distribution. This nonparametric estimate is plugged into a harmonic mean representation we previously exploited in our HPD proposal for evidence approximation:

$$1\bigg/\frac{1}{T}\sum_{t=1}^{T}\frac{\hat{\pi}(\theta_t)}{p(\theta_t,x)}$$

This estimate $\hat\pi$ is a histogram estimate, smoothed by the knowledge of the joint density at the points within the bins. Since the support of the histogram is not constrained to be smaller than the support of $\pi(\theta|x)$, as it is in our proposal, support problems could arise, given that the bins of the histogram are data-dependent. The associated R package margLikArrogance seems able to construct the bins by itself, but I have not tested it. Overall, the method seems bound to suffer from the curse of dimensionality, unless the bin construction reacts aggressively to empty bins. Since the paper contains no illustration whatsoever, it is difficult to tell… I would also like to see more details on the convergence results, including the CLT, for the arrogance approximation, because $\hat\pi$ depends on the whole sample and thus does not benefit from standard importance sampling convergence results.
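To make the idea concrete, here is a minimal sketch of the harmonic-mean-type identity above in a one-dimensional toy case, using a plain (unsmoothed) histogram for $\hat\pi$ rather than the paper's own bin construction; all names and the choice of target are illustrative assumptions, not the margLikArrogance implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unnormalised target q(theta) = exp(-theta^2/2), standing in for
# the joint density p(theta, x); its true normalising constant (the
# "evidence" here) is sqrt(2*pi) ~ 2.5066.
def q(theta):
    return np.exp(-theta ** 2 / 2)

# Draws from the normalised target (in practice these would be
# posterior MCMC draws; here we can sample exactly).
T = 100_000
theta = rng.standard_normal(T)

# Nonparametric histogram estimate pi_hat of the normalised density,
# evaluated back at the sampled points.
counts, edges = np.histogram(theta, bins=50, density=True)
bin_idx = np.clip(np.digitize(theta, edges) - 1, 0, len(counts) - 1)
pi_hat = counts[bin_idx]

# Harmonic-mean-type estimate: 1 / [ (1/T) sum_t pi_hat(theta_t) / q(theta_t) ].
Z_hat = 1.0 / np.mean(pi_hat / q(theta))
print(Z_hat)  # close to sqrt(2*pi)
```

Because $\hat\pi$ is built from the very sample it is evaluated on, the terms are not i.i.d., which is exactly why the standard importance sampling CLT does not apply directly; and in higher dimensions the histogram bins empty out, illustrating the curse of dimensionality mentioned above.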

3 Responses to “Arrogance sampling”

1. Emmanuel Charpentier Says:

Dear Pr Robert,

The link posted (twice) on your ‘Og entry points to something called “Bayesian Analysis of Loss Ratios Using the Reversible Jump Algorithm” by Garfield Brown and Steve Brooks.
Could you please correct this?

Sincerely yours,

• Thank you for pointing out the arXiv reference mistake. Presumably this was another paper I wanted to comment on… The package reference was wrong too.

2. Ben Escoto Says:

Thanks for the discussion! Yep, it definitely suffers from the curse of dimensionality, and you’re right that the amount of error introduced by the normalization wasn’t really covered. (It looked really complicated to deal with :-P)

Anyway, I was just looking for a simple way of computing marginal likelihoods for some models that weren’t too terribly difficult. Hopefully the package makes it really easy for practising statisticians to do the same, and the paper justifies the method to some extent.