Archive for Monte Carlo Statistical Methods

MCqMC 2020 live and free and online

Posted in pictures, R, Statistics, Travel, University life on July 27, 2020 by xi'an

The MCqMC 2020 conference that was to take place in Oxford on 9-14 August has been turned into a free online conference, since travelling remains a challenge for most of us. Tutorials and plenaries will be live, with questions on Zoom, and with live-streaming and recorded copies on YouTube. They will probably run during 14:00-17:00 UK time (GMT+1), 15:00-18:00 CET (GMT+2), and 9:00-12:00 ET. (Which will prove a wee bit of a challenge for researchers on the US West Coast and in most of Asia and Australasia, which is why, for our One World IMS-Bernoulli conference, we asked plenary speakers to duplicate their talks.) All other talks will be pre-recorded by contributors and uploaded to a website, with an online Q&A discussion section for each. As a reminder, here are the tutorials and plenaries:

Invited plenary speakers:

Aguêmon Yves Atchadé (Boston University)
Jing Dong (Columbia University)
Pierre L’Écuyer (Université de Montréal)
Mark Jerrum (Queen Mary University London)
Peter Kritzer (RICAM Linz)
Thomas Muller (NVIDIA)
David Pfau (Google DeepMind)
Claudia Schillings (University of Mannheim)
Mario Ullrich (JKU Linz)

Tutorials:

Fred Hickernell (IIT) — Software for Quasi-Monte Carlo Methods
Aretha Teckentrup (Edinburgh) — Markov chain Monte Carlo methods

deterministic moves in Metropolis-Hastings

Posted in Books, Kids, R, Statistics on July 10, 2020 by xi'an

A curio on X validated, where a hybrid Metropolis-Hastings scheme involves, once in a while, a deterministic transform. The idea is to flip the sample from one mode, ν, towards the other mode, μ, with a symmetry of the kind

μ-α(x+μ) and ν-α(x+ν)

with α a positive coefficient. Or the reciprocal,

-μ+(μ-x)/α and -ν+(ν-x)/α

for… reversibility reasons. In that case, the acceptance probability involves, on top of the ratio of the target densities, the Jacobian of the transform, just as in reversible jump MCMC.

Why the (annoying) Jacobian? The Jacobian is there to account for the change of measure induced by the deterministic transform, without which detailed balance with respect to the target fails.
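
As a quick sanity check (with arbitrary values of μ and α of my own choosing), the second transform indeed inverts the first, and the two Jacobians, α and 1/α, are the factors entering the acceptance ratio in each direction:

  mu=2; alpha=.5
  flip=function(x) mu-alpha*(x+mu)    # Jacobian in absolute value: alpha
  unflip=function(y) -mu+(mu-y)/alpha # Jacobian: 1/alpha
  x=rnorm(1)
  all.equal(unflip(flip(x)),x)        # TRUE: the pair is reversible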

Returning to the curio, the originator of the question had spotted some discrepancy between the target and the MCMC sample, as the empirical moments did not match well enough. For a similar toy model, a balanced Normal mixture, and an artificial flip consisting of

x’=±1-x/2 or x’=±2-2x

implemented by

  # target: balanced Normal mixture with modes near ±2, a guess consistent
  # with the flips above (the exact target was not given in the question)
  gnorm=function(x) dnorm(x,-2)+dnorm(x,2)
  T=1e4; mh=rep(0,T)
  for(t in 2:T){
    u=runif(5)
    if(u[1]<.5){ # uniform random walk move
      mhp=mh[t-1]+2*u[2]-1
      mh[t]=ifelse(u[3]<gnorm(mhp)/gnorm(mh[t-1]),mhp,mh[t-1])
    }else{       # deterministic flip, chosen between the two transforms
      dx=1+(u[4]<.5)
      mhp=ifelse(dx==1,
                 ifelse(mh[t-1]<0,1,-1)-mh[t-1]/2,
                 2*ifelse(mh[t-1]<0,-1,1)-2*mh[t-1])
      # acceptance includes the Jacobian factor dx/(3-dx), i.e., 1/2 or 2
      mh[t]=ifelse(u[5]<dx*gnorm(mhp)/gnorm(mh[t-1])/(3-dx),mhp,mh[t-1])
    }
  }

I could not spot said discrepancy beyond Monte Carlo variability.

an independent sampler that maximizes the acceptance rate of the MH algorithm

Posted in Books, Kids, Statistics, University life on September 3, 2019 by xi'an

An ICLR 2019 paper by Neklyudov, Egorov and Vetrov on an optimal choice of the proposal in an independent Metropolis algorithm, which I discovered via an X validated question, namely whether or not the expected Metropolis-Hastings acceptance ratio is always one (which it is not when the support of the proposal is restricted). The paper mentions the domination of the Accept-Reject algorithm by the associated independent Metropolis-Hastings algorithm, which was actually stated in our Monte Carlo Statistical Methods (1999, Lemma 6.3.2) and may prove even older. The authors also note that the expected acceptance probability is equal to one minus the total variation distance between the joint defined as target x Metropolis-Hastings proposal distribution and its time-reversed version. Which seems to suffer from the same difficulty as the one mentioned in the X validated question, namely that it only holds when the support of the Metropolis-Hastings proposal contains the support of the target (or else when the joint defined as target x Metropolis-Hastings proposal distribution is somewhat symmetric). Replacing total variation with Kullback-Leibler then leads to a manageable optimisation target if the proposal is a parameterised independent distribution, with a GAN version when the proposal is not explicitly available.

I find it rather strange that one still seeks independent proposals for running Metropolis-Hastings algorithms, as the result will depend on the family of proposals considered and as performances will deteriorate with dimension (the authors mention a 10% acceptance rate, which sounds quite low). [As an aside, ICLR 2020 will take place in Addis Ababa next April.]
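
Coming back to that total variation identity, here is a quick Monte Carlo check of my own, with a N(0,1) target and a N(0,2²) independent proposal (whose support trivially covers that of the target):

  set.seed(1)
  p=function(x) dnorm(x)     # target density
  q=function(x) dnorm(x,0,2) # independent proposal density
  N=1e6
  x=rnorm(N); y=2*rnorm(N)   # (x,y) simulated from target x proposal
  r=p(y)*q(x)/(p(x)*q(y))    # density ratio of the time-reversed joint
  mean(pmin(1,r))            # expected acceptance probability
  1-mean(abs(1-r))/2         # one minus the total variation distance

Both estimates agree up to Monte Carlo error, since min(1,r)=1-|1-r|/2-(1-r)/2 and r integrates to one when the supports match.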

off to SimStat2019, Salzburg

Posted in Mountains, Running, Statistics, University life on September 2, 2019 by xi'an

Today, I am off to Salzburg for the SimStat 2019 workshop, or more formally the 10th International Workshop on Simulation and Statistics, where I give a talk on ABC. The program of the workshop is quite diverse and rich, so I do not think I will have time to take advantage of the Hohe Tauern or the Berchtesgaden Alps to go climbing, especially since I am also discussing papers in an ABC session.

efficient MCMC sampling

Posted in Statistics on June 24, 2019 by xi'an

Maxime Vono, Daniel Paulin and Arnaud Doucet recently arXived a paper about a regularisation technique that allows for efficient sampling from a complex posterior whose potential function factorises as a large sum of transforms of linear projections of the parameter θ

U(\theta)=\sum_i U_i(A_i\theta)

The central idea in the paper [which was new to me] is to introduce an auxiliary variate for each term in the sum, replacing the projection in the corresponding transform, with an additional regularisation term forcing these auxiliary variates to be as close as possible to the corresponding projections

U(\theta,\mathbf z)=\sum_i U_i(z_i)+\varrho^{-1}||z_i-A_i\theta||^2

This is only an approximation to the true target, but it opens the possibility of running a massive Gibbs sampler where each update is of quite reduced dimension. As the variance ϱ of the regularisation term goes to zero, the marginal posterior on the parameter θ converges to the true posterior. The authors derive precise convergence rates, both in total variation and in Wasserstein distance.
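
As a minimal sketch of the resulting Gibbs sampler, consider a toy case of my own choosing (not the paper's example), with U_i(v)=(v-y_i)²/2, scalar projections A_iθ=a_iθ, a flat prior on θ, and a Gaussian coupling with variance ϱ; both conditionals are then Gaussian:

  set.seed(1)
  n=10; a=rnorm(n); y=a+rnorm(n)  # data simulated with true theta=1
  rho=.01; T=1e4; theta=rep(0,T)
  for(t in 2:T){
    # z_i given theta: product of U_i term and coupling, hence Gaussian
    z=rnorm(n,(y+a*theta[t-1]/rho)/(1+1/rho),1/sqrt(1+1/rho))
    # theta given z: driven by the coupling alone, again Gaussian
    theta[t]=rnorm(1,sum(a*z)/sum(a^2),sqrt(rho/sum(a^2)))
  }

The z conditional factorises over the index i, which is where the parallelisation argument originates, and as ϱ goes to zero the chain targets the exact Gaussian posterior.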

From a practical point of view, and judging only from the logistic example, it is hard to fathom how much this approach improves upon its competitors (when they still apply), as the impact of the value of ϱ should be assessed on top of the convergence of the high-dimensional Gibbs sampler. Or is there an annealing version in the pipeline? And while parallelisation is a major argument in favour of the method, it also seems that the Gibbs sampler needs central monitoring for each new simulation of θ, unless some asynchronous version can be implemented.