Archive for log-concave functions

ARS: when to update?

Posted in Books, Kids, Statistics, University life on May 25, 2017 by xi'an

An email I got today from Heng Zhou wondered about the validity of the above form of the ARS algorithm, as printed in our book Monte Carlo Statistical Methods. The worry is that, in the original version of the algorithm, the envelope of the log-concave target f(.) is only updated for rejected values. My reply to the question is that there is no difference between the versions in terms of returning a value simulated from f, since changing the envelope between simulations does not modify the accept-reject nature of the algorithm. Nor is there any issue of dependence between the simulations of this adaptive accept-reject method: all simulations remain independent. The question is rather one of efficiency, namely whether it pays to update the envelope(s) when accepting a new value, and I think it does, because the costly part is the computation of f(x) rather than the call to the piecewise-exponential envelope. Correct me if I am wrong!
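To make the point concrete, here is a minimal Python sketch of tangent-based adaptive rejection sampling in the spirit of Gilks and Wild, written so that the piecewise-exponential envelope is refreshed after every evaluation of the log-density, accepted or not. The interface (h, h_prime, x_init) and all implementation details are illustrative choices of mine, not the book's pseudo-code; the lower (squeezing) bound is skipped, and the sketch assumes no tangent slope is exactly zero and ignores overflow safeguards.

```python
import numpy as np

def ars(h, h_prime, x_init, n_samples, rng=None):
    """Adaptive rejection sampling sketch for a log-concave density exp(h(x))
    on the real line, with the envelope refreshed after EVERY evaluation of h,
    whether the proposed point is accepted or rejected."""
    rng = np.random.default_rng() if rng is None else rng
    xs = sorted(x_init)                       # abscissae bracketing the mode
    hs = [h(x) for x in xs]                   # log-density values
    dhs = [h_prime(x) for x in xs]            # log-density derivatives
    assert dhs[0] > 0 > dhs[-1], "x_init must straddle the mode"

    def envelope():
        """Tangent-line upper hull: breakpoints and segment masses."""
        x, hv, dh = map(np.asarray, (xs, hs, dhs))
        # intersection of the tangents at x[i] and x[i+1]
        z = (hv[1:] - hv[:-1] - x[1:] * dh[1:] + x[:-1] * dh[:-1]) \
            / (dh[:-1] - dh[1:])
        lo = np.concatenate(([-np.inf], z))   # segment i covers (lo[i], hi[i])
        hi = np.concatenate((z, [np.inf]))
        # integral of exp(hv[i] + dh[i] * (t - x[i])) over segment i
        mass = np.exp(hv - dh * x) * (np.exp(dh * hi) - np.exp(dh * lo)) / dh
        return lo, hi, mass

    def sample_envelope(lo, hi, mass):
        """One draw from the normalised piecewise-exponential upper hull."""
        i = rng.choice(len(mass), p=mass / mass.sum())
        a, d = hs[i] - dhs[i] * xs[i], dhs[i]  # tangent at xs[i]: a + d * t
        e_lo, e_hi = np.exp(d * lo[i]), np.exp(d * hi[i])
        t = np.log(e_lo + rng.random() * (e_hi - e_lo)) / d   # inverse CDF
        return t, a + d * t                    # proposal and hull value u(t)

    out = []
    while len(out) < n_samples:
        lo, hi, mass = envelope()
        x, ux = sample_envelope(lo, hi, mass)
        hx = h(x)                              # the costly evaluation
        if np.log(rng.random()) < hx - ux:     # usual accept-reject step
            out.append(x)
        # the hull is tightened with (x, h(x), h'(x)) in BOTH cases,
        # since h(x) has been paid for whether or not x is accepted
        j = int(np.searchsorted(xs, x))
        xs.insert(j, x); hs.insert(j, hx); dhs.insert(j, h_prime(x))
    return np.array(out)
```

With h(x) = -x²/2, h_prime(x) = -x and x_init = [-1.0, 1.0], the sketch returns standard normal draws, and the acceptance rate climbs as the hull tightens around the target.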

sampling from time-varying log-concave distributions

Posted in Statistics, University life on October 2, 2013 by xi'an

[Philadelphia downtown from Ben Franklin bridge, Oct. 31, 2020]

Sasha Rakhlin from Wharton sent me this paper he wrote (and arXived) with Hariharan Narayanan on a specific Markov chain algorithm that handles sequential Monte Carlo problems for log-concave targets. By relying on novel (by my standards) mathematical techniques, they manage to obtain geometric ergodicity results for random-walk based algorithms and log-concave targets. One of the new tools is the notion of a self-concordant barrier, a sort of convex potential function F associated with a reference convex set and enjoying Lipschitz properties. The second tool is a Gaussian distribution based on the metric induced by F. The third is the Dikin walk Markov chain, which uses this Gaussian as its proposal and moves almost like the Metropolis-Hastings algorithm, except that it rejects with probability at least ½. The scale (or step size) of the Gaussian proposal is determined by the regularity of the log-concave target. In that setting, the total variation distance between the target at the t-th level and the distribution of the Markov chain can be approximated fairly precisely, which in turn gives a scaling of the number of random-walk steps necessary to ensure convergence. Depending on the pace of the moving target, a single step of the random walk may be sufficient, which is quite an interesting feature.
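For intuition, here is a minimal Python sketch of a Dikin-walk-style sampler for a log-concave density restricted to a polytope {x : Ax ≤ b}, using the logarithmic barrier as self-concordant barrier, its Hessian as the local metric of the Gaussian proposal, and a lazy Metropolis filter that stays put with probability ½. The step size r, the function names, and the Metropolis correction are illustrative choices of mine and do not reproduce the constants or the guarantees of the Narayanan and Rakhlin paper.

```python
import numpy as np

def dikin_walk(log_target, A, b, x0, n_steps, r=0.5, rng=None):
    """Dikin-walk-style sampler sketch for a log-concave density on the
    polytope {x : A x <= b}: barrier-metric Gaussian proposal plus a lazy
    Metropolis filter (hold with probability 1/2)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    d = x.size

    def hessian(x):
        # Hessian of the log-barrier F(x) = -sum_i log(b_i - a_i^T x)
        s = b - A @ x                          # slacks, positive inside
        return (A / s[:, None] ** 2).T @ A     # sum_i a_i a_i^T / s_i^2

    def prop_logpdf(y, x, H, logdetH):
        # log density of N(x, (r^2/d) H^{-1}) at y, dropping the constants
        # that are common to both directions of the Metropolis ratio
        diff = y - x
        return 0.5 * logdetH - 0.5 * (d / r ** 2) * diff @ H @ diff

    samples = []
    for _ in range(n_steps):
        if rng.random() < 0.5:                 # lazy step: hold w.p. 1/2
            samples.append(x.copy())
            continue
        Hx = hessian(x)
        L = np.linalg.cholesky(np.linalg.inv(Hx))
        y = x + (r / np.sqrt(d)) * L @ rng.standard_normal(d)
        if np.any(b - A @ y <= 0):             # left the polytope: reject
            samples.append(x.copy())
            continue
        Hy = hessian(y)
        _, logdet_x = np.linalg.slogdet(Hx)
        _, logdet_y = np.linalg.slogdet(Hy)
        log_alpha = (log_target(y) - log_target(x)
                     + prop_logpdf(x, y, Hy, logdet_y)
                     - prop_logpdf(y, x, Hx, logdet_x))
        if np.log(rng.random()) < log_alpha:
            x = y
        samples.append(x.copy())
    return np.array(samples)
```

Calling this with, say, A and b encoding a box, log_target(x) = -x @ x / 2, and a feasible x0 produces draws from a Gaussian truncated to that box. The paper's contribution is precisely the bound on how few such steps are needed per level when the log-concave target drifts over time, which this sketch makes no attempt to reproduce.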
