Slices and crumbs [arXiv:1011.4722]

An interesting note was arXived a few days ago by Madeleine Thompson and Radford Neal. Besides the nice touch of mixing crumbs and slices, the neat idea is to use multiple-try proposals for simulating within a slice and to decrease the dimension of the simulation space at each try. This dimension reduction is achieved by building an orthogonal basis from the gradients of the log density at the previously rejected proposals,

\[
\mathbf{J}=\big(\nabla\log f(x_1),\ldots,\nabla\log f(x_k)\big)
\]

until all dimensions are exhausted, in which case the scale of the Gaussian proposal is reduced. (The paper comes with R and C code.) Provided the gradient can be computed (or at least approximated), this is a fairly general method, even though I have not tested it and so cannot say how much calibration it requires. An interesting point is that, in contrast with the delayed-rejection method of Antonietta Mira and co-authors, the repeated proposals do not complicate the slice acceptance probability. I am less convinced by the authors’ conclusion that the method compares with adaptive Metropolis techniques, in the sense that the “shrinking rank” method forgets about past experience as it starts from scratch at each iteration: it is thus not really learning… (Now, in terms of performance, it may well compare!)
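To fix ideas, here is a minimal Python sketch of one within-slice update in the shrinking-rank spirit, leaving out the crumb machinery of the paper: each rejected proposal contributes its projected gradient to the orthogonal basis \mathbf{J} when that direction is informative enough, and the Gaussian scale shrinks otherwise. The 0.5 angle threshold, the shrink factor, and the max_tries safeguard are illustrative choices of mine, not values from the paper.

```python
import numpy as np

def shrinking_rank_slice_step(x0, log_f, grad_log_f,
                              sigma0=1.0, shrink=0.9, max_tries=100):
    """One within-slice update in the shrinking-rank spirit (sketch only)."""
    d = len(x0)
    # Draw the slice level in log scale: log y = log f(x0) - Exp(1)
    log_y = log_f(x0) - np.random.exponential()
    J = np.zeros((d, 0))   # orthonormal directions to project out
    sigma = sigma0
    for _ in range(max_tries):
        # Gaussian proposal restricted to the orthogonal complement of span(J)
        z = np.random.normal(size=d)
        z = z - J @ (J.T @ z)
        x_star = x0 + sigma * z
        if log_f(x_star) >= log_y:
            return x_star          # proposal falls inside the slice: accept
        # Rejected: project the gradient at the proposal out of span(J)
        g = grad_log_f(x_star)
        g_perp = g - J @ (J.T @ g)
        norm = np.linalg.norm(g_perp)
        # Keep the direction if it is informative (angle with g below ~60
        # degrees, an illustrative threshold) and dimensions remain;
        # otherwise shrink the proposal scale
        if J.shape[1] < d - 1 and norm > 0.5 * np.linalg.norm(g):
            J = np.hstack([J, (g_perp / norm)[:, None]])
        else:
            sigma *= shrink
    return x0  # safeguard cap, not part of the original method

# Illustrative usage on a standard Gaussian target
log_f = lambda x: -0.5 * x @ x
grad_log_f = lambda x: -x
x = np.zeros(5)
for _ in range(1000):
    x = shrinking_rank_slice_step(x, log_f, grad_log_f)
```

The projection step is the whole point: once a gradient direction has been seen at a rejected proposal, later tries are confined to the remaining subspace, so the proposals adapt within the iteration even though, as noted above, everything is reset at the next one.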
