## bounded target support [#2]

**I**n a sort of echo from an earlier post, I received this emailed question from Gabriel:

> I am contacting you in connection with my internship and your book «Le choix bayésien», where I cannot find an answer to my question. Given a constrained parameter space and an unconstrained Markov chain, is it correct to subsample the chain in order to keep only those points that satisfy the constraint?

**T**o which I replied that this would induce ~~a bias~~ in the outcome, even though the argument is marginally valid (if the Markov chain is in its stationary regime, picking one value at random from those satisfying the constraint is akin to accept-reject). The unbiased approach is to resort to Metropolis-Hastings steps in Gabriel’s Gibbs sampler to check whether or not each proposed move stays within the constrained space. If not, one needs to replicate the current value of the chain…
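As a minimal sketch of this Metropolis-Hastings step (the target, constraint, and tuning below are hypothetical placeholders, not Gabriel’s actual model): a proposal outside the constrained space has target density zero, so it is rejected and the current value is replicated.

```python
import numpy as np

rng = np.random.default_rng(0)

def constrained_mh(logpdf, in_support, x0, n_iter=10_000, scale=1.0):
    """Random-walk Metropolis-Hastings that treats proposals outside
    the constrained support as density-zero moves: they are rejected
    and the current value of the chain is replicated."""
    x = x0
    chain = np.empty(n_iter)
    for t in range(n_iter):
        prop = x + scale * rng.standard_normal()
        # A proposal outside the constraint has acceptance probability 0,
        # so the chain stays put for this iteration.
        if in_support(prop) and np.log(rng.uniform()) < logpdf(prop) - logpdf(x):
            x = prop
        chain[t] = x
    return chain

# Toy (hypothetical) example: standard normal restricted to [0, 1]
chain = constrained_mh(lambda x: -0.5 * x**2,
                       lambda x: 0.0 <= x <= 1.0, x0=0.5)
```

Every value of `chain` then lies inside the constrained set by construction, with no samples discarded after the fact.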

**Update: F**ollowing comments by Ajay and David, I withdraw the term “bias”. The method works as a low-key accept-reject but can be very inefficient, to the extent of having no visit to the constrained set. (However, in the event of a practically/numerically disconnected support with a huge gap between the connected components, it may also be more efficient than a low-energy Metropolis-Hastings algorithm. Mileage may vary!)

January 10, 2019 at 1:53 pm

Hi Prof. Robert:

As you mentioned in the post:

“it may also be more efficient than a low energy Metropolis-Hastings algorithm”.

Actually, I did come across this kind of test results.

But I am very confused, and curious: what is the theoretical basis for it being more efficient?

Could you help me with this?

Best.

Thanks.

January 10, 2019 at 2:21 pm

Imposing the constraint by a specifically designed proposal may slow down the exploration of the support of the distribution in some cases, while a broader support for the proposal has a higher potential to reach faraway regions in particular when the support is not connected. I do not have a particular theoretical result in mind connected with this intuition.

January 10, 2019 at 5:33 pm

Very helpful. Thanks a lot!

December 24, 2018 at 3:25 pm

Dear Prof. Robert:

Thanks for this post, which is very helpful to me.

I study MCMC rendering algorithms in computer graphics.

I have met situations where excluding samples that are impossible values under the stationary distribution of the Markov chain makes the algorithms more efficient.

Considering that the probability of proposing impossible values (I mean, values outside the bounded target support) exceeds 50% in MCMC rendering algorithms, which makes rendering inefficient, I believe something needs to be done to counteract this situation and improve rendering efficiency.

But, you said that “there is no mathematical reason for doing so!”

I don’t quite understand what exactly the “mathematical reason” is.

I wonder if there is anything that can be done to remedy impossible values/proposals?

By the way, the sample space of rendering algorithms is infinite-dimensional.

Thanks.

Best.

Libing Zeng

December 24, 2018 at 5:16 pm

I wrote *mathematical* when meaning that proposing values outside the support of the target does not invalidate the Markov chain as converging to the true target. No mention is made of efficiency in this statement. If a manageable proposal restricted to the exact support is available and improves convergence, there is no reason not to use it.

December 24, 2018 at 5:59 pm

Thanks for your prompt reply.

Yes, you’re right.

It does converge to the true target.

One of the major research directions in MCMC rendering algorithms is to find better proposal strategies, so as to reduce the percentage of proposals outside the support of the target.

Though a lot of effort has been put in, the percentage is still larger than 50%.

So, if we can find some variations of M-H sampling that reduce the negative impact on efficiency, things could be better.

I know M-H with delayed rejection does help.

From the perspective of efficiency, at least for MCMC rendering algorithms, is there anything we can do to remedy the bad proposals?

Best.

July 8, 2013 at 4:31 am

Dear Christian,

Thanks for this post, very interesting way to start the day for me!

Although this is not quite the answer to the question (and perhaps exactly what you say at the beginning of your answer – apologies if it is), you could base estimates on the samples in the right set (let’s call it A).

If one adopts the estimate, for a one-dimensional test function $\varphi$ and $N$ MCMC samples $X_1,\dots,X_N$:

$$\hat\varphi_N = \frac{\sum_{i=1}^N \varphi(X_i)\,\mathbb{I}_A(X_i)}{\sum_{i=1}^N \mathbb{I}_A(X_i)}$$

Under minimal assumptions and for an ergodic chain (for a probability $\pi$ with $\pi(A)>0$), this would converge to

$$\mathbb{E}_\pi[\varphi\mid A]=\frac{\mathbb{E}_\pi[\varphi\,\mathbb{I}_A]}{\pi(A)}$$

Perhaps this is the point you are making at the beginning of your response, however.

This is not quite the question but, if one wanted just to use the sub-samples for estimating expectations w.r.t. the target restricted to A, you could do so in an asymptotically consistent sense. This ‘idea’, however, does not say anything about the variance of the estimate or whether it is as good as the approach you suggest!
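As a minimal sketch of this sub-sample ratio estimator (the standard-normal target, the set A = [0, 1], and the test function are all hypothetical, with i.i.d. draws standing in for stationary MCMC output):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "chain" output in its stationary regime: standard normal draws.
draws = rng.standard_normal(100_000)
phi = lambda x: x**2                      # one-dimensional test function
in_A = (draws >= 0.0) & (draws <= 1.0)    # indicator of the set A = [0, 1]

# Ratio estimator: average phi over the sub-sample falling in A.
# Consistent for E_pi[phi | A] whenever pi(A) > 0.
estimate = phi(draws[in_A]).sum() / in_A.sum()
```

With enough mass on A the estimator is well behaved; when A sits far in the tails, the sub-sample count in the denominator collapses and the variance blows up, which is the inefficiency discussed in the update above.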

Thanks,

Ajay

July 8, 2013 at 3:46 am

I don’t see where this induces a bias. Are you saying that if a chain produces samples from f defined on S and T is a subset of S, then those samples restricted to T are not unbiased on f restricted to T?

July 8, 2013 at 8:57 am

Take as a target the standard normal distribution restricted to [3,4]: a regular MCMC will not visit this interval unless the target is also restricted. This is not a bias, granted, but the algorithm does not work.
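A minimal numerical illustration of this [3,4] example (the random-walk sampler and its tuning are hypothetical): a chain targeting the untruncated standard normal spends a vanishing fraction of its time in [3,4], so sub-sampling it leaves almost nothing, whereas restricting the target keeps every iteration inside the interval.

```python
import numpy as np

rng = np.random.default_rng(2)

def rw_mh(logpdf, x0, n_iter=50_000, scale=1.0):
    """Plain random-walk Metropolis-Hastings sampler."""
    x, out = x0, np.empty(n_iter)
    for t in range(n_iter):
        prop = x + scale * rng.standard_normal()
        if np.log(rng.uniform()) < logpdf(prop) - logpdf(x):
            x = prop
        out[t] = x
    return out

# Chain on the untruncated standard normal: [3,4] is almost never visited,
# since P(3 <= Z <= 4) is about 0.0013.
plain = rw_mh(lambda x: -0.5 * x**2, x0=0.0)
frac_in_interval = np.mean((plain >= 3.0) & (plain <= 4.0))

# Restricting the target instead: density is zero outside [3,4], so every
# proposal leaving the interval is rejected and the chain never exits it.
trunc_logpdf = lambda x: -0.5 * x**2 if 3.0 <= x <= 4.0 else -np.inf
restricted = rw_mh(trunc_logpdf, x0=3.5, scale=0.3)
```

The contrast makes the point of the reply: sub-sampling the plain chain is not biased in the limit, but in practice it may simply never produce a usable sample from the constrained set.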

July 8, 2013 at 10:59 pm

Right, I agree with that: it can be very inefficient, especially when the target is in the tails of the original distribution.

But it can be a sensible approach if one is unable or unwilling to change the code for the original sampler and the target has reasonable coverage.