Perhaps I can add a footnote on our more recent developments? Decent progress has been made on understanding the convergence rates of these estimators, and a pre-print of this new work is available [1]. To spoil the plot of the new paper: we can show that the convergence rate is governed by the minimum of the number of derivatives of the target density and the number of derivatives of the integrand. This means that problems which are smooth in both of these senses admit faster-converging estimators, much as you might expect by analogy with quasi-Monte Carlo methods.

The more I think about these methods, the closer they come to “kernel quadrature”, albeit with a rather non-standard function space. Nevertheless, these methods should be interpretable in terms of integral operators and eigen-decompositions, just like kernel quadrature (of which Bayesian quadrature is an example). That is the direction I would like to take this work, going forward!
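To make the kernel quadrature connection concrete, here is a minimal Bayesian quadrature sketch (not from the paper; the Gaussian RBF kernel, the standard normal target measure, and all function names are my own illustrative choices). For this kernel/measure pair the kernel mean embedding is available in closed form, so the quadrature weights are just the solution of a linear system in the Gram matrix:

```python
import numpy as np

# Illustrative Bayesian quadrature for integrals against N(0, 1),
# using a Gaussian RBF kernel, for which the kernel mean embedding
# z_i = int k(x, x_i) N(x; 0, 1) dx has a closed form.

def rbf_kernel(x, y, ell):
    """Gaussian RBF kernel k(x, y) = exp(-(x - y)^2 / (2 ell^2))."""
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * ell ** 2))

def kernel_mean(x, ell):
    """Closed-form embedding of the RBF kernel against N(0, 1)."""
    return np.sqrt(ell ** 2 / (ell ** 2 + 1)) * np.exp(-x ** 2 / (2 * (ell ** 2 + 1)))

def bq_estimate(f, nodes, ell=1.0, jitter=1e-8):
    """Estimate int f(x) N(x; 0, 1) dx via weights w = K^{-1} z."""
    K = rbf_kernel(nodes, nodes, ell) + jitter * np.eye(len(nodes))
    z = kernel_mean(nodes, ell)
    w = np.linalg.solve(K, z)   # kernel quadrature weights
    return w @ f(nodes)

nodes = np.linspace(-4.0, 4.0, 25)
print(bq_estimate(lambda x: x ** 2, nodes))  # close to E[X^2] = 1 for X ~ N(0, 1)
```

The point of the sketch is that the whole method is determined by the integral operator of the kernel against the target measure; the Stein-type estimators differ only in that the underlying function space (and hence the kernel) is non-standard.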

[1] Oates CJ, Cockayne J, Briol F-X, Girolami M. Convergence Rates for a Class of Estimators Based on Stein’s Identity. http://arxiv.org/abs/1603.03220
