As you mentioned in the post:

“it may also be more efficient than a low energy Metropolis-Hastings algorithm”.

Actually, I did come across test results of this kind.

But I am quite confused and curious: what is the theoretical basis for it being more efficient?

Could you help me with this?

Best.

Thanks.

Yes, you’re right.

It does converge to the true target.

One of the major research directions in MCMC rendering algorithms is to find better proposal strategies, so as to reduce the percentage of proposals falling outside the support of the target.

Despite considerable effort, that percentage is still larger than 50%.

So if we can find variations of M-H sampling that reduce this negative impact on efficiency, things could be better.

I know that M-H with delayed rejection does help.
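As a concrete sketch of that idea (not from the post — the target density, proposal scales, and seed are purely illustrative), one stage of delayed rejection for a target with bounded support might look like this: a proposal falling outside the support has target density zero and is rejected automatically, but instead of staying put the sampler gets a second, narrower try.

```python
import math
import random

def target(x):
    # Illustrative unnormalized density with bounded support [0, 1]
    # (a Beta(3, 2) kernel); zero outside, so out-of-support proposals
    # always have acceptance probability zero.
    return x * x * (1.0 - x) if 0.0 <= x <= 1.0 else 0.0

def npdf(d, s):
    # Unnormalized Gaussian kernel; normalizing constants cancel in ratios.
    return math.exp(-0.5 * (d / s) ** 2)

def alpha1(x, y):
    # Stage-one M-H acceptance probability (symmetric proposal).
    px = target(x)
    return min(1.0, target(y) / px) if px > 0.0 else 1.0

def dr_mh(n_iter, x0=0.5, s1=0.5, s2=0.1, seed=1):
    """Random-walk M-H with one stage of delayed rejection: after a
    stage-one rejection, a second (narrower) proposal is tried, with
    the stage-two acceptance ratio chosen to preserve detailed balance."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n_iter):
        y1 = x + rng.gauss(0.0, s1)                 # stage-one proposal
        if rng.random() < alpha1(x, y1):
            x = y1
        else:
            y2 = x + rng.gauss(0.0, s2)             # stage-two retry
            num = target(y2) * npdf(y1 - y2, s1) * (1.0 - alpha1(y2, y1))
            den = target(x) * npdf(y1 - x, s1) * (1.0 - alpha1(x, y1))
            if den > 0.0 and rng.random() < min(1.0, num / den):
                x = y2
        chain.append(x)
    return chain
```

The chain never leaves the support, since any out-of-support point has density zero and so acceptance probability zero; the second stage simply salvages some of the otherwise wasted iterations.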

From the perspective of efficiency, at least for MCMC rendering algorithms, is there anything we can do as a remedy for the bad proposals?

Best.

Thanks for this post, which is very helpful to me.

I study MCMC rendering algorithms in computer graphics.

I have met the situation where excluding samples that are impossible values under the stationary distribution of the Markov chain makes the algorithms more efficient.

Considering that the probability of proposing impossible values (I mean, values outside the bounded support of the target) is over 50% in MCMC rendering algorithms, which makes rendering inefficient, I believe something needs to be done to counteract this situation, so that we can improve rendering efficiency.
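To illustrate the kind of waste being described here (a toy sketch, not a rendering algorithm — the support, proposal scale, and uniform target are all made up), a random-walk sampler whose step size is large relative to a bounded support can easily propose outside that support more than half the time:

```python
import random

def in_support(x):
    # Hypothetical bounded support [0, 1], standing in for the bounded
    # domain of a rendering integrand.
    return 0.0 <= x <= 1.0

def target(x):
    # Uniform (unnormalized) target on the support.
    return 1.0 if in_support(x) else 0.0

def mh_waste_fraction(n_iter=50_000, scale=1.5, seed=0):
    """Fraction of random-walk proposals falling outside the support.
    Such proposals have target density zero, so their M-H acceptance
    probability is zero and they are rejected automatically: the chain
    still targets the right distribution, but those moves are wasted."""
    rng = random.Random(seed)
    x, outside = 0.5, 0
    for _ in range(n_iter):
        y = x + rng.uniform(-scale, scale)
        if not in_support(y):
            outside += 1        # wasted proposal: acceptance prob is 0
        elif rng.random() < min(1.0, target(y) / target(x)):
            x = y
    return outside / n_iter
```

With this proposal width (3 units against a support of length 1), well over half of the proposals land outside the support, yet the rejections are exactly what keeps the chain targeting the right distribution.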

But you said that “there is no mathematical reason for doing so!”

I don’t quite understand what exactly the “mathematical reason” is.

I wonder if anything can be done to remedy the impossible values/proposals?

By the way, the sample space of rendering algorithms is infinite-dimensional.

Thanks.

Best.

Libing Zeng

But it can be a sensible approach if one is unable or unwilling to change the code for the original sampler and the target has reasonable coverage.

Thanks for this post, very interesting way to start the day for me!

Although this is not quite an answer to the question (and perhaps exactly what you say at the beginning of your answer – apologies if so), you could base estimates on the samples falling in the right set (let’s call it A).

If one adopts the estimate, for a one-dimensional test function $h$ and $N$ MCMC samples $X_1,\ldots,X_N$,

$\hat h_N = \sum_{n=1}^N h(X_n)\,\mathbf{1}_A(X_n) \Big/ \sum_{n=1}^N \mathbf{1}_A(X_n)$

then, under minimal assumptions and for an ergodic chain (with stationary probability $\pi$), this would converge to

$\mathbb{E}_\pi\left[h(X)\mid X\in A\right].$

Perhaps this is the point you are making at the beginning of your response, however.

This is not quite the question but, if one wanted just to use the sub-samples for estimating expectations w.r.t. $\pi$ restricted to $A$, you could do so in an asymptotically consistent sense. This ‘idea’, however, does not say anything about the variance of the estimate, or whether it is as good as the approach you suggest!
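A minimal sketch of this restricted (sub-sample) estimator — using iid standard-normal draws in place of an actual MCMC chain, and A = (0, ∞), both purely for illustration:

```python
import math
import random

def restricted_estimate(samples, h, in_A):
    """Self-normalized estimate of E[h(X) | X in A]: keep only the
    samples landing in A and average h over them.  Consistent for the
    restriction of the target to A, but this says nothing about the
    variance relative to sampling A directly."""
    kept = [h(x) for x in samples if in_A(x)]
    if not kept:
        raise ValueError("no samples fell in A")
    return sum(kept) / len(kept)

# Illustrative use: N(0, 1) draws, A = positive half-line, h(x) = x.
# The limit E[X | X > 0] is sqrt(2 / pi) for a standard normal.
rng = random.Random(42)
xs = [rng.gauss(0.0, 1.0) for _ in range(100_000)]
est = restricted_estimate(xs, h=lambda x: x, in_A=lambda x: x > 0.0)
```

The denominator (the count of samples in A) is what makes the estimator self-normalizing, so the unknown normalizing constant of the restricted target never needs to be computed.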

Thanks,

Ajay
