**S**till woke up too early, which let me go for a long run on Mont Royal (which felt almost immediately familiar from earlier runs at MCM 2017!) at dawn and at a pleasant temperature (although I missed the top bagel bakery on the way back!). Skipped the morning plenary lectures to complete recommendation letters and finish a paper submission. But had a terrific lunch with a good friend I had not seen since before Covid times, at a local branch of Kinton Ramen, which I had already enjoyed in Vancouver as my Airbnb there was located on top of it.

I chaired the afternoon Bayesian computations session, with Onur Teymur presenting the general spirit of his NeurIPS 2021 paper on black box probabilistic numerics, and mentioning that a new textbook on the topic by Philipp Hennig, Michael Osborne, and Hans Kersting had appeared today! The second talk was by Laura Bondi, who discussed an ABC model choice approach to assess breast cancer screening, with enough missing data (out of 78,051 women followed over 12 years) to lead to an intractable likelihood. Starting with vanilla ABC using 32 summaries and moving to our random forest approach. Unsurprisingly concluding with different top models, but without characterising the identifiability provided by the choice of the summaries. The third talk was by Ryan Chan (a fresh Warwick PhD recipient), about a Fusion divide-and-conquer approach that avoids the approximations of earlier versions. In particular, he uses a clever accept-reject algorithm to generate from a product of densities using only the component densities. A nice trick that Murray explained to me while visiting Paris last month. (The approach appears to be parameterisation dependent.) The final talk was by Umberto Picchini, in a sense the synthetic likelihood mirror of Massi's talk yesterday, in that it constructs a guided proposal relying on the observed summaries. Though without comparing both approaches on a common toy example like the g-and-k distribution.
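To give a flavour of the product-of-densities trick mentioned above, here is a minimal textbook rejection sketch (not Ryan Chan's actual Fusion algorithm, whose construction is more elaborate): to sample from a density proportional to f₁(x)f₂(x), propose from f₁ and accept with probability f₂(x)/M, where M bounds f₂ from above. The Gaussian choices for f₁ and f₂ below are purely illustrative.

```python
import math
import random

def f2(x):
    # N(1, 1) density, bounded above by M = 1/sqrt(2*pi)
    return math.exp(-0.5 * (x - 1.0) ** 2) / math.sqrt(2 * math.pi)

M = 1.0 / math.sqrt(2 * math.pi)

def sample_product(n, seed=0):
    """Draw n samples from the density proportional to f1 * f2
    by rejection: propose x ~ f1 = N(0,1), accept w.p. f2(x)/M."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.gauss(0.0, 1.0)       # proposal from f1
        if rng.random() < f2(x) / M:  # accept using only f2
            out.append(x)
    return out

# The product N(0,1) x N(1,1) is proportional to N(1/2, 1/2),
# so the sample mean should sit near 0.5.
draws = sample_product(20000)
print(sum(draws) / len(draws))
```

The appeal is that each component density is only ever evaluated on its own, which is what makes such schemes attractive for divide-and-conquer settings where each subposterior lives on a separate machine.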