Archive for ratio of integrals

bandits for doubly intractable posteriors

Posted in Statistics on April 17, 2019 by xi'an

Last Friday, Guanyang Wang arXived a paper on the use of multi-armed bandits (hence the reference to the three bandits) to handle intractable normalising constants. The bandit compares or mixes the Møller et al. (2006) auxiliary variable solution with the Murray et al. (2006) exchange algorithm. Which are both special cases of pseudo-marginal MCMC algorithms. In both cases, the auxiliary variables produce an unbiased estimator of the ratio of the constants, rather than the ratio of two unbiased estimators as in the more standard pseudo-marginal MCMC. The current paper tries to compare the two approaches based on the variance of the ratio estimate, but cannot derive a general ordering. The multi-armed bandit algorithm exploits both estimators of the acceptance ratio to pick the one that is almost the largest, almost because there is a correction for validating the step by detailed balance: the bandit acceptance probability is the maximum [over the methods] of the minimum [over the time directions] of the original acceptance ratio. While this appears to be valid, note that the resulting algorithm involves four times as many auxiliary variates as the original ones, which makes me wonder at the gain when compared with a parallel implementation of these methods, coupled at random times. (The fundamental difficulty of simulating from likelihoods with an unknown normalising constant remains, see p.4.)
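To fix ideas, here is a minimal Python sketch of one possible reading of this max-of-min rule, where the entries of ratio_estimators are hypothetical placeholders for the competing unbiased estimators of the acceptance ratio (say, exchange versus auxiliary variable); it only illustrates the combination and makes no claim to reproduce the actual algorithm of the paper:

import numpy as np

rng = np.random.default_rng(0)

def max_min_accept_prob(ratio_estimators, theta, theta_prop):
    """Maximum over methods of the minimum over time directions of the
    estimated Metropolis-Hastings ratio r(theta -> theta_prop): each
    estimator is queried on the forward move and on the reverse move
    (the latter inverted), the two are combined by a minimum, and the
    best method is kept, capped at one. Hypothetical helper functions."""
    per_method = []
    for estimate in ratio_estimators:
        forward = estimate(theta, theta_prop)    # estimate of r(theta -> theta_prop)
        backward = estimate(theta_prop, theta)   # estimate of r(theta_prop -> theta)
        per_method.append(min(forward, 1.0 / backward))
    return min(1.0, max(per_method))

# toy check with two noisy estimators of a move whose true ratio is 2
noisy = lambda a, b: (2.0 if a < b else 0.5) * rng.lognormal(0.0, 0.3)
print(max_min_accept_prob([noisy, noisy], 0.0, 1.0))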

normalising constants of G-Wishart densities

Posted in Books, Statistics on June 28, 2017 by xi'an

Abdolreza Mohammadi, Hélène Massam, and Gérard Letac arXived last week a paper on a new approximation of the ratio of the normalising constants of two G-Wishart densities associated with different graphs G. The G-Wishart is the generalisation of the Wishart distribution by Alberto Roverato to the case when some entries of the matrix are equal to zero, at locations determined by the graph G. While sharing the same shape as the Wishart density, this generalisation does not enjoy a closed-form normalising constant. Which leads to an intractable ratio of normalising constants when doing Bayesian model selection across different graphs.
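To make the zero pattern concrete, here is a tiny NumPy illustration of my own (not from the paper): the free entries of a G-Wishart precision matrix sit on the diagonal and on the edges of G, with structural zeros everywhere else:

import numpy as np

# adjacency matrix of a small illustrative graph G on four nodes
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

# support of a G-Wishart precision matrix: diagonal plus edges of G,
# all remaining off-diagonal entries constrained to zero
support = A.astype(bool) | np.eye(A.shape[0], dtype=bool)
print(support.astype(int))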

Atay-Kayis and Massam (2005) expressed the ratio as a ratio of two expectations, and the current paper shows that this leads to an approximation of the ratio of normalising constants for a graph G against the graph G augmented by the edge e, equal to

\dfrac{\Gamma(\frac{\delta+d}{2})}{2\sqrt{\pi}\,\Gamma(\frac{\delta+d+1}{2})}

where δ is the number of degrees of freedom of the G-Wishart and d is the number of minimal paths of length 2 linking the two endpoints of e. This is remarkably concise and provides a fast approximation. (The proof is quite involved, by comparison.) Which can then be used in reversible jump MCMC. The difficulty obviously lies in evaluating the impact of the approximation on the target density, as there is no manageable alternative available to calibrate the approximation. In a simulation example where such an alternative is available, the error is negligible though.
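Since the approximation only involves two Gamma functions, it is immediate to evaluate; a minimal Python check, with delta standing for δ and d for the number of length-two paths, as above:

from math import gamma, pi, sqrt

def approx_ratio(delta, d):
    # approximate ratio of normalising constants for G versus G augmented
    # by the edge e, as displayed above
    return gamma((delta + d) / 2) / (2 * sqrt(pi) * gamma((delta + d + 1) / 2))

# e.g., delta = 3 and no length-two path between the endpoints of e
print(approx_ratio(3, 0))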

Random construction of interpolating sets

Posted in Kids, Statistics, University life on January 5, 2012 by xi'an

One of the many arXiv papers I could not discuss earlier is Huber and Schott's "Random construction of interpolating sets for high dimensional integration", which relates to their earlier TPA paper at the València meeting. (A paper that we discussed with Nicolas Chopin.) TPA stands for tootsie pop algorithm. The paper is very pleasant to read, just like its predecessor. The principle behind TPA is that the number of steps in the algorithm is Poisson with parameter connected to the unknown measure of the inner set:

N\sim\mathcal{P}(\ln[\mu(B)/\mu(B^\prime)])

Therefore, the variance of the estimator is known as well. This is a significant property of a mathematically elegant solution. As already argued in our earlier discussion, it however seems to me that the paper is defending an integral approximation that is far from realistic. Indeed, the TPA method requires as a fundamental item the ability to simulate from the measure μ restricted to a level set A(β). Exact simulation seems close to impossible in any realistic problem, just as in Skilling's (2006) nested sampling. Furthermore, the comparison with nested sampling is dismissed rather summarily: that the variance of this alternative cannot be computed "prior to running the algorithm" does not mean it is larger than that of the TPA method. If the proposal is to become a realistic algorithm, some degree of comparison with existing methods should appear in the paper. (A further, if minor, comment about the introduction is that the reason for picking the relative ideal balance α=0.2031 for the embedded sets is not clear. Not that it really matters in the implementation, unless Section 5 on well-balanced sets is connected with this ideal ratio…)
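For readers unfamiliar with TPA, here is a toy Python sketch of my own of the general recipe, taking μ as Lebesgue measure and the nested sets A(β) as centred balls, a setting where exact simulation is trivial (unlike the realistic problems discussed above); it only illustrates the Poisson count N:

import numpy as np

rng = np.random.default_rng(1)

def tpa_run(R, r, dim=2):
    # one TPA run with mu = Lebesgue measure and A(beta) the centred ball of
    # radius beta: draw uniformly in the current ball, shrink the radius to
    # the norm of the draw, and count the draws landing outside the inner ball
    beta, count = R, 0
    while beta > r:
        direction = rng.standard_normal(dim)
        direction /= np.linalg.norm(direction)
        x = beta * rng.uniform() ** (1.0 / dim) * direction  # uniform in ball of radius beta
        beta = np.linalg.norm(x)
        if beta > r:
            count += 1
    return count

# the counts average to ln(mu(B)/mu(B')) = dim * ln(R/r), here 2 ln 2 ≈ 1.386
runs = [tpa_run(R=2.0, r=1.0) for _ in range(10_000)]
print(np.mean(runs))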