What’s AMIS?

AMIS stands for adaptive mixture importance sampling and, with Jean-Marie Cornuet, Antonietta Mira and Jean-Michel Marin, we have just posted the paper on arXiv, as well as submitted it to JASA. This new paper is related to a series of papers on adaptive mixture (PMC) algorithms we wrote over the past years, as for instance this version to appear in Statistics and Computing, with a novel feature: it applies Owen and Zhou’s (2000, JASA) stabilisation mixture technique to the past and present importance weights at each iteration of this iterative algorithm. Describing the specifics of this weight update is a bit delicate to do in this post, so I refer interested readers to the paper, but the stabilisation gains provided by this technique are really appreciable.

Just to make the above understandable, the AMIS method starts with a sample

x_{1i}\sim g_0(x)

generated from an importance function g_0, while aiming at the target density \pi. Regular importance weights are then

\pi(x_{1i})/g_0(x_{1i}).
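In code, this first step might look as follows (a minimal sketch: the unnormalised Gaussian target, the N(0,2) importance function and the tail-probability estimand are my own illustrative choices, not from the paper; self-normalising the weights means the target need only be known up to a constant):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target pi, known only up to a normalising constant
# (here an unnormalised standard Gaussian).
def target(x):
    return np.exp(-x**2 / 2.0)

# Initial importance function g_0: a wider Gaussian N(0, 2^2).
def g0(x):
    return np.exp(-x**2 / 8.0) / (2.0 * np.sqrt(2.0 * np.pi))

n = 10_000
x1 = rng.normal(0.0, 2.0, size=n)   # x_{1i} ~ g_0
w1 = target(x1) / g0(x1)            # regular importance weights

# Self-normalised importance sampling estimate of P(X > 1) under pi:
est = np.sum(w1 * (x1 > 1.0)) / np.sum(w1)
```

Self-normalisation (dividing by the sum of the weights) is what lets the unknown normalising constant of the target cancel out.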

The second step of the AMIS algorithm then builds a second importance function g_1 that depends on the first sample, generates a new sample

x_{2i}\sim g_1(x)

and derives the importance weights

\dfrac{\pi(x_{1i})}{g_0(x_{1i})+g_1(x_{1i})}\ \text{and}\ \dfrac{\pi(x_{2i})}{g_0(x_{2i})+g_1(x_{2i})}

for those two samples. The major feature is therefore that it updates the weights of all past iterations, not only those of the current iteration.
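The two steps above can be sketched as follows (again a minimal illustration, not the paper's full algorithm: the bimodal target and the moment-matching construction of g_1 are my own assumptions; the mixture denominator g_0 + g_1 matches the displayed weights, any multiplicative constant cancelling after self-normalisation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative unnormalised bimodal target.
def target(x):
    return np.exp(-(x - 3.0)**2 / 2.0) + np.exp(-(x + 3.0)**2 / 2.0)

def norm_pdf(x, mu, sigma):
    return np.exp(-(x - mu)**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

n = 5_000

# Step 1: sample x_{1i} ~ g_0 and compute regular importance weights.
mu0, s0 = 0.0, 5.0
g0 = lambda x: norm_pdf(x, mu0, s0)
x1 = rng.normal(mu0, s0, size=n)
w1 = target(x1) / g0(x1)

# Step 2: build g_1 from the weighted first sample (here by moment
# matching, one possible adaptive choice), then sample x_{2i} ~ g_1.
w1n = w1 / w1.sum()
mu1 = np.sum(w1n * x1)
s1 = np.sqrt(np.sum(w1n * (x1 - mu1)**2))
g1 = lambda x: norm_pdf(x, mu1, s1)
x2 = rng.normal(mu1, s1, size=n)

# Backward update: ALL weights, past and present, now use the mixture
# denominator g_0 + g_1 (Owen & Zhou's stabilisation device).
w1 = target(x1) / (g0(x1) + g1(x1))
w2 = target(x2) / (g0(x2) + g1(x2))

# Both samples contribute to any self-normalised estimate, e.g. the mean
# (near zero here, by symmetry of the target).
w = np.concatenate([w1, w2])
xs = np.concatenate([x1, x2])
est_mean = np.sum(w * xs) / np.sum(w)
```

Note how the weights of the first sample are recomputed once g_1 exists: this backward recycling is what distinguishes AMIS from a plain sequence of independent importance sampling steps.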

Since some readers may have seen preliminary versions of this work given as early as 2007 in Salt Lake City, the main reason for this lengthy production is that we tried many avenues to overcome the block of proving a complete convergence result for this method. The backward update of the weights, combined with the adaptive construction of the importance functions, creates a loophole in the probabilistic spaces involved that prevents a convergence result at the same level as our earlier results. So the present version contains partial convergence results in the general case and complete convergence results in special cases, like distributions with compact supports. This is not fully satisfactory, but we have bothered so many people with questions on how to overcome this difficulty that we are now convinced there is no easy solution!

2 Responses to “What’s AMIS?”

  1. [...] made requests about the theoretical validation of the AMIS algorithm, sending us back to the earliest version of the paper where we had described our endeavours and lack of success to completely circumvent the [...]

  2. [...] attempts at improving our understanding of AMIS convergence, we have now resubmitted the AMIS paper to Scandinavian Journal of Statistics and arXived the new version as well. (I remind the reader [...]
