**T**he second day of the ABC wwworkshop got a better start than yesterday [for me] as I managed to bike to Dauphine early enough to watch the end of Gael’s talk and Matias Quiroz’s in full on the Australian side (of Zoom). With an interesting take on using frequency-domain (pseudo-)likelihoods in complex models. Followed by two talks by David Frazier from Monash and Chris Drovandi from Brisbane on BSL, the first on misspecification with a finer analysis as to why synthetic likelihood may prove worse: the Mahalanobis distance behind it may get very small and the predictive distribution of the distance may become multimodal. Also pointing out the poor coverage of both ABC and BSL credible intervals. And Chris gave a wide-ranging coverage of summary-free likelihood-free approaches, with examples where they were faring well against some summary-based solutions. Olivier from Grenoble [with a co-author from Monash, keeping up the Australian theme] discussed dimension reductions that could possibly lead to better summary statistics, albeit unrelated to ABC!
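For readers unfamiliar with BSL, here is a minimal numpy sketch (toy model and numbers mine, not from the talks) of the Gaussian synthetic likelihood; its Mahalanobis-distance core is precisely the quantity whose behaviour under misspecification David Frazier's analysis dissects:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_loglik(theta, s_obs, simulate, n_sim=200):
    """Gaussian synthetic log-likelihood: fit a Normal to summaries
    simulated at theta, then evaluate the observed summary under it."""
    S = np.array([simulate(theta) for _ in range(n_sim)])
    mu = S.mean(axis=0)
    Sigma = np.cov(S, rowvar=False) + 1e-8 * np.eye(S.shape[1])
    diff = s_obs - mu
    # Mahalanobis term: under misspecification, s_obs can sit far from
    # the cloud of simulated summaries and this quadratic form degenerates
    maha = diff @ np.linalg.solve(Sigma, diff)
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (maha + logdet + len(mu) * np.log(2 * np.pi))

# toy model: Normal(theta, 1) data, summaries = (sample mean, sample variance)
def simulate(theta, n=100):
    x = rng.normal(theta, 1.0, size=n)
    return np.array([x.mean(), x.var()])

s_obs = np.array([0.1, 1.05])          # pretend observed summaries
print(synthetic_loglik(0.0, s_obs, simulate))
```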

Riccardo Corradin considered this most Milanese problem of all problems (!), namely how to draw inference on completely random distributions. The clustering involved in this inference being costly, the authors use our Wasserstein ABC approach on the partitions, with a further link to our ABC-Gibbs algorithm (which Grégoire had just presented) for the tolerance selection. Marko Järvenpää presented an approach related to a just-published paper in Bayesian Analysis, with a notion of noisy likelihood modelled as a Gaussian process. Towards avoiding evaluating the (greedy) likelihood too often, as in the earlier Korattikara et al. (2014). And coining the term of *Bayesian Metropolis-Hastings sampler* (as the regular Metropolis (Rosenbluth) is frequentist)! And Pedro Rodrigues discussed using normalising flows in poorly identified (or inverse) models, raising the issue of validating this approximation to the posterior and connecting with earlier talks.
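As a rough illustration of the surrogate idea behind Marko's talk (kernel, toy model and tuning values all mine, and much simpler than the actual method), one can fit a GP to a small budget of noisy log-likelihood evaluations and read an approximate high-likelihood region off the posterior mean, instead of calling the expensive likelihood on a fine grid:

```python
import numpy as np

def rbf(X1, X2, ell=1.0, sf=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    d = X1[:, None] - X2[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(1)

def noisy_loglik(theta, n=50):
    """Toy 'expensive' log-likelihood: Normal(theta, 1) fit, evaluated on a
    fresh pseudo-sample centred at 0.3, so every call returns a noisy value."""
    x = rng.normal(0.3, 1.0, size=n)
    return -0.5 * n * np.mean((x - theta) ** 2)

# small budget of likelihood calls at design points
thetas = np.linspace(-2.0, 2.0, 8)
y = np.array([noisy_loglik(t) for t in thetas])

# GP regression posterior mean on a grid (centred responses, fixed noise level)
sigma_n = 1.0
K = rbf(thetas, thetas) + sigma_n**2 * np.eye(len(thetas))
grid = np.linspace(-2.0, 2.0, 201)
mean = rbf(grid, thetas) @ np.linalg.solve(K, y - y.mean()) + y.mean()
print(grid[np.argmax(mean)])   # surrogate's guess at the high-likelihood region
```

The actual approach is sequential, choosing where to evaluate next via an acquisition criterion rather than a fixed design, but the GP-as-noisy-likelihood-model ingredient is the same.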

The afternoon session was a replay of the earlier talks from the Australian mirrors. Clara Grazian gave the first talk yesterday on using and improving a copula-based ABC, introducing empirical likelihood, Gaussian processes and splines. Leading to a question as to whether or not the copula family could be chosen by ABC tools. David Nott raised the issue of conflicting summary statistics, illustrated by a Poisson example using the pair made of the empirical mean and the empirical variance as summary: while the empirical mean is sufficient on its own, conditioning on both leads to a different ABC outcome. Which indirectly relates to a work in progress in our Dauphine group. Anthony Ebert discussed the difficulty of handling state-space model parameters with ABC. In an ABCSMC² version, the likelihood is integrated out by a particle filter approximation, but this leads to difficulties with the associated algorithm, which I somewhat associate with the discrete nature of the approximation, possibly incorrectly. Jacob Priddle talked about a whitening version of Bayesian synthetic likelihood, arguing that the variance of the Monte Carlo approximation to the moments of the Normal synthetic likelihood is much improved when assuming that the components of the summary statistic are independent. I am somewhat puzzled by the proposal, though, in that the whitening matrix needs to be estimated as well.
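David Nott's Poisson point can be replayed on a toy rejection-ABC sampler (prior, sample size and acceptance rule all mine): the empirical mean is sufficient for the Poisson rate, yet appending the empirical variance to the summary changes which proposals get accepted, since the two statistics may conflict in a given sample:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x_obs = rng.poisson(3.0, size=n)       # pretend observed Poisson sample

def abc_posterior(summary, n_prop=50_000, keep=500):
    """Rejection ABC with a Uniform(0, 10) prior on lambda: keep the
    proposals whose simulated summaries fall closest to the observed ones."""
    s_obs = summary(x_obs)
    lams = rng.uniform(0.0, 10.0, size=n_prop)
    sims = rng.poisson(lams[:, None], size=(n_prop, n))
    S = np.apply_along_axis(summary, 1, sims)
    d = np.linalg.norm(S - s_obs, axis=1)
    return lams[np.argsort(d)[:keep]]

# summary = mean only (sufficient) vs. the pair (mean, variance)
post_mean = abc_posterior(lambda x: np.array([x.mean()]))
post_both = abc_posterior(lambda x: np.array([x.mean(), x.var()]))
print(post_mean.mean(), post_both.mean())
```

The two accepted samples need not coincide: whenever the observed variance drifts away from the observed mean (as sampling noise allows), the pair-based distance trades off matching the mean against matching the variance, pulling the ABC output away from the one conditioned on the sufficient statistic alone.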

Thanks to all colleagues and friends involved in building and running the mirrors and making some exchanges possible despite the distances and time differences! Looking forward to a genuine ABC meeting in a reasonable future and, who knows?!, to reuniting in Svalbard for real, rather than starting a new series of “ABC not in…”! (The temperature in Longyearbyen today was -14°C, if this makes some feel better about missing the trip!!!)