About the comparison between the MLE and the ABC estimator, I think their similarity comes from the convergence order of the Monte Carlo variance and the negligibility of the ABC bias. When $\varepsilon=O(n^{-1/2})$ and the importance function is sensible, the Monte Carlo variance is only $K/N$ times larger than the variance of the MLE, where $K$ is a constant, and the bias from $\varepsilon$ is negligible thanks to the weaker requirement $\varepsilon=o(n^{-1/4})$. Therefore, as the data size grows, the difference between the mean squared errors of ABC and the MLE can be made arbitrarily small with a large but fixed $N$.
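A toy numerical check of this claim (my own sketch, not from the paper): for a $N(\theta,1)$ model with a flat prior, the MLE of $\theta$ is the sample mean, and a plain accept/reject ABC step with tolerance $\varepsilon = n^{-1/2}$ and a fixed number of proposals should land close to the MLE once $n$ is large. All the numbers below (the proposal scale, the number of proposals) are illustrative choices.

```python
import random

def abc_vs_mle(theta0=2.0, n=10_000, n_props=5_000, seed=1):
    """Toy comparison of the MLE and a fixed-N ABC estimate for a normal mean.

    Assumes a N(theta, 1) model with a flat prior, so the sufficient summary
    is the sample mean and accepted draws can be averaged without weights.
    """
    rng = random.Random(seed)
    # observed data: n draws from N(theta0, 1); the MLE is the sample mean
    s_obs = sum(rng.gauss(theta0, 1.0) for _ in range(n)) / n
    eps = n ** -0.5                        # tolerance of order n^{-1/2}
    accepted = []
    for _ in range(n_props):
        theta = rng.gauss(s_obs, 1.0)      # diffuse proposal around the data
        s_sim = rng.gauss(theta, n ** -0.5)  # sample mean simulated under theta
        if abs(s_sim - s_obs) <= eps:
            accepted.append(theta)
    abc_est = sum(accepted) / len(accepted)
    return s_obs, abc_est
```

With $n = 10{,}000$ and only 5,000 proposals the accepted draws all sit within roughly $\varepsilon$ of the sample mean, so the ABC estimate and the MLE agree to a couple of hundredths, despite $N$ being fixed.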

I guess the reason behind the intuition that the Monte Carlo error would explode with a fixed $N$ is that, when the data size is large, either the acceptance probability is small (when sampling from the prior) or the importance weights are skewed (when sampling around the true value). The class of ‘sensible’ proposal distributions somehow balances these two, keeping the acceptance probability away from 0 and the variance of the importance weights under control.
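The acceptance-probability half of this trade-off is easy to see numerically. In the same toy normal-mean setting as above (my construction, not from the paper), a prior-scale proposal wastes almost every draw, while a proposal whose spread matches the $n^{-1/2}$ scale of the summary keeps the acceptance probability bounded away from zero; the flat-prior assumption hides the weight-variance half, which is where a too-narrow proposal would pay instead.

```python
import random

def acceptance_rate(prop_sd, n=10_000, n_props=20_000, seed=2):
    """Monte Carlo estimate of the ABC acceptance probability for a
    normal-mean toy model, as a function of the proposal spread prop_sd.

    The toy problem is centred at the truth (s_obs = 0) purely for simplicity.
    """
    rng = random.Random(seed)
    s_obs = 0.0
    eps = n ** -0.5
    hits = 0
    for _ in range(n_props):
        theta = rng.gauss(s_obs, prop_sd)
        s_sim = rng.gauss(theta, n ** -0.5)   # summary simulated under theta
        if abs(s_sim - s_obs) <= eps:
            hits += 1
    return hits / n_props

wide = acceptance_rate(prop_sd=1.0)                   # "prior-like" proposal
local = acceptance_rate(prop_sd=2 * 10_000 ** -0.5)   # n^{-1/2}-scale proposal
```

On this example the prior-scale proposal accepts well under 1% of draws, while the localised one accepts about a third of them, an order-of-magnitude gap that only widens as $n$ grows.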

“Something I have (slight) trouble with is the construction of an importance sampling function of the fABC(s|θ)^{α} when, obviously, this function cannot be used for simulation purposes. The authors point out this fact, but still build an argument about the optimal choice of α, namely away from 0 and 1, like ½”

I guess one key result for us is that the Monte Carlo error is well behaved for IS-ABC if you choose a sensible proposal, in the sense that even if $N$ is fixed it does not dominate the sampling variability of the estimator. This was surprising at first, as we initially expected that the acceptance probability would always get smaller with more data, a message that comes across in some of the recent work on ABC, such as optimisation Monte Carlo, which suggests that acceptance probabilities in ABC will be small in big-data applications.

The key message from the above result was that the class of “sensible” proposal distributions is wide: if you work with fABC(s|θ)^{α}, then any α in (0,1) is sensible.