Archive for ARMS algorithm

MCqMC 2014 [day #3]

Posted in pictures, Running, Statistics, Travel, University life, Wines on April 10, 2014 by xi'an

[photo: Leuven]

As the second day at MCqMC 2014 was mostly on multi-level Monte Carlo and quasi-Monte Carlo methods, I did not attend many talks but had a long run in the countryside (even saw a pheasant and a heron), worked at “home” on pressing recruiting evaluations, and had a long working session with Pierre Jacob. Plus an evening out sampling (just) a few Belgian beers in the shade of the city hall…

Today was more in my ballpark as there were MCMC talks the whole day! The plenary talk was not about MCMC: Erich Novak presented a survey of the many available results bounding the complexity of approximating an integral from a fixed number of evaluations of the integrand, some involving the dimension (and its curse), some not, some achieving the Monte Carlo rate of 1/√n and some slower, all depending on the regularity and the size of the classes of integrands considered. In some cases the solution was importance sampling, in other cases quasi-Monte Carlo, and yet other cases remain unsolved. Then Yves Atchadé gave a new perspective on estimating the asymptotic variance in the Markov chain central limit theorem by truncating the autocovariance series, Matti Vihola talked about theoretical orderings of Markov chains that translate into the very practical consequence that using more simulations in a pseudo-marginal likelihood approximation improves both the acceptance rate and the asymptotic variance (and this applies to ABC-MCMC as well), Radu Craiu proposed a novel take on adaptive MCMC by treating various approximations to the true target as food for a multiple-try Metropolis algorithm, and Luca Martino had a go at resuscitating the ARMS algorithm of Gilks, Best and Tan (used for a while in BUGS), although the talk did not dissipate all of my misgivings about the multidimensional version! I had more difficulty following the “Warwick session”, which was made up of four talks by current or former students from Warwick, although I appreciated the complexity of the results in infinite-dimensional settings and the novel approximations to diffusion-based Metropolis algorithms. No further session this afternoon as the “social” activity was a visit to the nearby Stella Artois brewery! This activity made us very social, for certain, even though there was hardly a soul around in this massively automated factory. (Maybe an ‘Og post to come one of those days…)
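
For readers who have not met the pseudo-marginal mechanism behind Matti Vihola's ordering, here is a minimal sketch of the phenomenon on a purely illustrative toy latent-variable model (nothing taken from the talk): averaging more unbiased likelihood estimates reduces the noise in the acceptance ratio and hence lifts the acceptance rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def loglik_hat(theta, n_particles):
    """Toy unbiased estimate of an intractable likelihood.
    Illustrative model: latent x ~ N(theta, 1), observation y = 1.2,
    and p(y|theta) is estimated by averaging p(y|x) over simulated x."""
    y = 1.2
    x = rng.normal(theta, 1.0, size=n_particles)
    w = np.exp(-0.5 * (y - x) ** 2) / np.sqrt(2 * np.pi)  # p(y|x)
    return np.log(np.mean(w))  # log of an unbiased likelihood estimate

def pseudo_marginal_mh(n_iter=5000, n_particles=10, step=0.5):
    theta = 0.0
    ll = loglik_hat(theta, n_particles)
    chain, accepts = [theta], 0
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        ll_prop = loglik_hat(prop, n_particles)
        # flat prior for simplicity; the current estimate is recycled,
        # never refreshed, which is the pseudo-marginal trick
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
            accepts += 1
        chain.append(theta)
    return np.array(chain), accepts / n_iter

# more particles -> less noisy likelihood estimates -> higher acceptance
for N in (1, 10, 100):
    _, rate = pseudo_marginal_mh(n_particles=N)
    print(f"N = {N:3d}  acceptance rate: {rate:.2f}")
```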

sticky Metropolis

Posted in Statistics, University life on September 6, 2013 by xi'an

My former student Roberto Casarin and his colleagues wrote (and arXived) a paper entitled Adaptive sticky generalized Metropolis algorithm. The basic idea is to use some of the rejected and past values of the chain to build an adaptive proposal, the criterion for choosing those values being related to the distance between the target and the proposal at the rejected point. In a sense, it gives a reward to surprising points, i.e. points where the proposal does poorly in approximating the target. On top of this, they include a multiple-try strategy where several values are generated from the current proposal and one of them is selected, to be accepted or rejected in a Metropolis step. The learning set may include several of the proposed (and rejected) values. This paper generalises Holden, Hauge and Holden (AoAP, 2009) and extends their proof of stationarity. The authors explore at length (the paper is 63 pages long!) the construction of the adaptive proposal distribution. This construction appears to be quite similar to the ARMS algorithm of Gilks, Best and Tan (1995). Hence, unless I missed a generalisation, it seems to me that the solutions are restricted to unidimensional settings. For instance, the authors propose to implement their algorithm for each complex conditional in a Gibbs sampler, meaning starting from scratch and running a large enough number of iterations to “reach” convergence. I also wonder about the correspondence between this construction and the original assumption of a minorisation condition with respect to the target density in the event of an unbounded support. While this paper represents an interesting extension of the automated simulation algorithms of the ARMS type, and while the method is investigated thoroughly through several simulation experiments (in the second half of the paper), I remain somewhat circumspect about the possibility of using ASMTM in complex high-dimensional problems, as the learning cost soars with the dimension.
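
To give a more concrete feel for the “sticky” mechanism, here is a minimal one-dimensional sketch: an independence Metropolis sampler whose piecewise-constant proposal is rebuilt whenever a rejected point exposes a large discrepancy between proposal and target. The toy target, the interpolation rule and the threshold are my own illustrative choices, the multiple-try layer of the paper is omitted, and the sketch glosses over the conditions the authors impose for the adaptation to preserve the right stationary distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# unnormalised one-dimensional target: a two-component Gaussian mixture (illustrative)
def target(x):
    return 0.6 * np.exp(-0.5 * (x + 2.0) ** 2) + 0.4 * np.exp(-0.5 * ((x - 2.5) / 0.7) ** 2)

class StickyProposal:
    """Piecewise-constant independent proposal on [lo, hi], built on a growing
    set of support points (a crude stand-in for the interpolations of the paper)."""

    def __init__(self, lo, hi, init_points):
        self.points = sorted(set([lo, hi] + list(init_points)))
        self._rebuild()

    def _rebuild(self):
        s = np.array(self.points)
        h = np.maximum(target(s[:-1]), target(s[1:]))  # height over each interval
        w = h * np.diff(s)                             # unnormalised mass of each interval
        self.s, self.raw = s, h                        # raw heights approximate the target
        self.h = h / w.sum()                           # normalised density values
        self.cum = np.cumsum(w) / w.sum()

    def add(self, x):
        self.points = sorted(set(self.points + [float(x)]))
        self._rebuild()

    def _interval(self, x):
        return int(np.clip(np.searchsorted(self.s, x) - 1, 0, len(self.h) - 1))

    def sample(self):
        i = min(int(np.searchsorted(self.cum, rng.uniform())), len(self.h) - 1)
        return rng.uniform(self.s[i], self.s[i + 1])

    def pdf(self, x):       # normalised proposal density at x
        return self.h[self._interval(x)]

    def interp(self, x):    # unnormalised approximation of the target at x
        return self.raw[self._interval(x)]

def sticky_metropolis(n_iter=5000, eps=0.3):
    q = StickyProposal(-8.0, 8.0, init_points=[-2.0, 0.0, 2.0])
    x, chain = 0.0, []
    for _ in range(n_iter):
        y = q.sample()
        # independence-sampler acceptance ratio
        alpha = min(1.0, target(y) * q.pdf(x) / (target(x) * q.pdf(y)))
        if rng.uniform() < alpha:
            x = y
        elif abs(target(y) - q.interp(y)) / target(y) > eps:
            # "sticky" update: a rejected point where the proposal fits the
            # target badly is recycled as a new support point
            q.add(y)
        chain.append(x)
    return np.array(chain)

samples = sticky_metropolis()
print("sample mean over the mixture:", round(samples.mean(), 3))
```

Rejected points in poorly approximated regions are thus recycled into the proposal, which is the reward-for-surprise idea above; the cost of rebuilding and evaluating such a proposal is also what makes me doubtful about high-dimensional extensions.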