## pitfalls of nested Monte Carlo

A few days ago, Tom Rainforth, Robert Cornish, Hongseok Yang, and Frank Wood from Oxford have arXived a paper on the limitations of nested Monte Carlo. By nested Monte Carlo [not *nested sampling*], they mean Monte Carlo techniques used to evaluate the expectation of a non-linear transform of an expectation, an evaluation that often calls for a plug-in resolution. The main result is that this expectation cannot be evaluated by an unbiased estimator, which is only mildly surprising. I do wonder if there still exist series solutions à la Glynn and Rhee, as in the Russian roulette version, a possibility mentioned in a footnote. Or specially tuned versions, as suggested by some techniques found in Devroye’s book, where the expectation of the exponential of another expectation is considered… (The paper is quite short, which may be correlated with the format imposed by some machine-learning conference proceedings like AISTATS.)
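To see the bias of the plug-in estimator concretely, here is a minimal sketch (the function name `plugin_estimate` and the toy setup are mine, not from the paper): estimating exp(E[Z]) for Z ~ N(0,1), whose true value is exp(0) = 1. Since the inner sample mean Z̄ of N draws is N(0, 1/N), the plug-in estimator exp(Z̄) has expectation exp(1/2N) by the lognormal mean formula, so it is biased for every finite N, with the bias vanishing only as N grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Plug-in nested Monte Carlo estimator of exp(E[Z]) for Z ~ N(0, 1):
# average exp(Z_bar) over many outer replications, where Z_bar is the
# mean of n_inner standard normal draws. The truth is exp(0) = 1, but
# E[exp(Z_bar)] = exp(1 / (2 * n_inner)) > 1 for every finite n_inner.
def plugin_estimate(n_inner, n_outer=100_000):
    inner_means = rng.standard_normal((n_outer, n_inner)).mean(axis=1)
    return np.exp(inner_means).mean()

for n in (1, 10, 100):
    print(f"N={n:>3}: estimate {plugin_estimate(n):.4f}, "
          f"theoretical mean {np.exp(0.5 / n):.4f}")
```

The gap between the estimate and 1 shrinks like 1/(2N) but never reaches zero, which is exactly the obstruction to unbiasedness discussed above.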

December 19, 2016 at 11:36 am

“as suggested by some techniques found in Devroye’s book where the expectation of the exponential of another expectation is considered…” – May I ask where in Devroye’s book this is discussed?

December 19, 2016 at 2:27 pm

You may ask, but I am afraid the exact reference will have to wait for a fortnight, as I have limited access to the Internet and limited time as well. Check for Forsythe’s method.

December 19, 2016 at 2:43 pm

Ok! Found it on page 123.
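For the curious, the method pointed to above, in von Neumann’s special case of Forsythe’s method, can be sketched as follows (the function name `forsythe_exp` is mine): it samples from a density proportional to exp(−x) on [0,1) using only uniform draws and comparisons, never evaluating the exponential itself, which is how an expectation of an exponential gets handled without bias.

```python
import numpy as np

# Von Neumann's special case of Forsythe's method: sample X from the
# density proportional to exp(-x) on [0,1) without ever computing exp.
def forsythe_exp(rng):
    while True:
        x = rng.random()
        prev, n = x, 0
        # Draw U1, U2, ... while the run x >= U1 >= U2 >= ... keeps
        # descending; with N the index of the first uniform breaking
        # the descent, P(N odd) = exp(-x), so accepting x on odd N
        # produces the target density.
        while True:
            u = rng.random()
            n += 1
            if u > prev:
                break
            prev = u
        if n % 2 == 1:
            return x

rng = np.random.default_rng(0)
draws = [forsythe_exp(rng) for _ in range(20_000)]
print(np.mean(draws))  # close to (1 - 2/e) / (1 - 1/e), about 0.418
```

The acceptance test relies on P(x ≥ U₁ ≥ … ≥ Uₖ) = xᵏ/k!, whose alternating sum telescopes to exp(−x); Forsythe’s generalization replaces x by G(x) to sample from densities proportional to exp(−G(x)).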