Here are the slides of my talk today on using Wasserstein distances as an intrinsic distance measure in ABC, as developed in our papers with Espen Bernton, Pierre Jacob, and Mathieu Gerber:
This morning, Gael Martin discussed the surprising aspects of ABC prediction, expanding upon her talk at ISBA, with several threads very much worth weaving into the ABC tapestry, one being that summary statistics need to be used to increase the efficiency of the prediction, along with better adapted measures of distance. Her talk also led me to ponder the myriad of possibilities available (or not) in the most generic of ABC predictions (which is not the framework of Gael's talk). If we imagine a highly intractable setting, it may be that the marginal generation of a predicted value at time t+1 requires generating the entire past from time 1 till time t, possibly because of a massive dependence on latent variables and the absence of particle filters, if this makes any sense. Hence, given a generated parameter value θ, the entire series may need to be simulated to reach the last value in the series. Even when unnecessary, this may be an alternative to conditioning upon the actual series. In this latter case, comparing both predictions may act as a natural measure of distance, since one prediction is a function or statistic of the actual data while the other is a function of the simulated data (a sketch of this idea follows below).

Another direction I mused about is the use of (handy) auxiliary models, each producing a prediction as a new statistic, which could then be merged and weighted (or even selected) by a random forest procedure, as in the second sketch below. Again, if the auxiliary models are relatively well-behaved, timewise, this would be quite straightforward to implement.
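To make the first musing concrete, here is a minimal sketch of ABC with a prediction-based distance: for each θ drawn from the prior, the whole series from time 1 to t is simulated, and θ is kept when the prediction computed from the simulated series is close to the prediction computed from the observed series. Everything here is an assumption for illustration: `simulate_series` (a toy AR(1) standing in for an intractable latent-variable model) and `predict_next` (a naive last-value forecast) are hypothetical placeholders, not anything from Gael's talk or our papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_series(theta, t, rng):
    # Hypothetical stand-in for an intractable model: an AR(1)
    # simulated sequentially from time 1 to t.
    y = np.empty(t)
    y[0] = rng.normal()
    for i in range(1, t):
        y[i] = theta * y[i - 1] + rng.normal()
    return y

def predict_next(series):
    # Hypothetical predictor of the value at time t+1: a naive
    # "last value" forecast, a mere statistic of the series.
    return series[-1]

def abc_prediction_distance(y_obs, prior_sample, n_draws, eps, rng):
    # Accept theta when the prediction from the simulated series is
    # close to the prediction from the observed series: the distance
    # compares the two predictions, not the raw series.
    t = len(y_obs)
    pred_obs = predict_next(y_obs)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        y_sim = simulate_series(theta, t, rng)  # entire past, 1..t
        if abs(predict_next(y_sim) - pred_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

y_obs = simulate_series(0.7, 100, rng)
post = abc_prediction_distance(y_obs, lambda r: r.uniform(-1, 1), 5000, 0.1, rng)
```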
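And a hedged sketch of the second musing, under the same toy model: each auxiliary model turns a series into a one-step-ahead prediction, these predictions become the summary statistics, and a random forest regresses the parameter on them, thereby weighting (and implicitly selecting among) the auxiliary predictions. The auxiliary "models" below, rolling means at a few window lengths, are hypothetical placeholders for properly fitted auxiliary models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def simulate_series(theta, t, rng):
    # Same toy AR(1) simulator as above.
    y = np.empty(t)
    y[0] = rng.normal()
    for i in range(1, t):
        y[i] = theta * y[i - 1] + rng.normal()
    return y

def aux_predictions(series):
    # One forecast per auxiliary model; rolling means stand in for
    # predictions from fitted auxiliary models.
    return np.array([series[-k:].mean() for k in (1, 5, 20)])

# Reference table: theta from the prior, series from the model,
# summarised by the vector of auxiliary predictions.
n, t = 2000, 100
thetas = rng.uniform(-1, 1, n)
X = np.stack([aux_predictions(simulate_series(th, t, rng)) for th in thetas])

# The forest merges and weights the auxiliary predictions.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, thetas)

y_obs = simulate_series(0.7, t, rng)
theta_hat = forest.predict(aux_predictions(y_obs).reshape(1, -1))
```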