A brand-new monthly series of Parisian seminars on the theory and practice of Monte Carlo in statistics and data science is starting, in conjunction with our ERC OCEAN project. To kick-start the series, the organisers, Joshua Bon and Andrea Bertazzi, first postdocs in the project, will present some of their work on Friday 13 October, 4pm–6pm, Room 7, PariSanté Campus, 2 Rue d'Oradour-sur-Glane, Paris 15e. The following seminars are planned for Friday 17 November and Friday 15 December.
4pm/16h CEST: Piecewise deterministic sampling with splitting schemes
Andrea Bertazzi, CMAP – École Polytechnique
Piecewise deterministic Markov processes (PDMPs) have received substantial interest in recent years as an alternative to classical Markov chain Monte Carlo algorithms. While the theoretical properties of PDMPs have been studied extensively, their practical implementation remains limited to specific applications in which bounds on the gradient of the negative log-target can be derived. To address this problem, we propose to approximate PDMPs using splitting schemes, that is, by simulating the deterministic dynamics and the random jumps in two separate stages. We show that symmetric splittings of PDMPs are second-order accurate. We then focus on the Zig-Zag sampler (ZZS) and show how to remove the bias of the splitting scheme with a skew-reversible Metropolis filter. Finally, we illustrate the advantages of the proposed scheme over competitors with numerical simulations.
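To fix ideas, here is a minimal sketch of a symmetric (deterministic–jump–deterministic) splitting step for a one-dimensional Zig-Zag sampler on a standard Gaussian target. This is only an illustration of the splitting idea, not the authors' exact scheme, and in particular it omits the skew-reversible Metropolis correction the talk describes; the target and step size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_U(x):
    # Gradient of the negative log-density; standard Gaussian target, U(x) = x^2/2
    return x

def zigzag_splitting_step(x, v, delta):
    """One symmetric splitting step: half a deterministic move, then the
    random velocity flip, then the second half of the deterministic move."""
    x = x + 0.5 * delta * v            # half step of the linear dynamics
    rate = max(0.0, v * grad_U(x))     # Zig-Zag switching rate at the midpoint
    if rng.random() < 1.0 - np.exp(-delta * rate):
        v = -v                         # random jump: flip the velocity
    x = x + 0.5 * delta * v            # second half step with the (possibly new) velocity
    return x, v

# Run the discretized (hence slightly biased) sampler on the Gaussian target
x, v, delta = 0.0, 1.0, 0.1
samples = []
for _ in range(50_000):
    x, v = zigzag_splitting_step(x, v, delta)
    samples.append(x)
print(np.mean(samples), np.var(samples))  # roughly 0 and 1 for this target
```

The second-order accuracy claim refers to how the invariant measure of such a symmetric scheme deviates from the true target as the step size delta shrinks; the Metropolis filter of the talk removes even this residual bias.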
5pm/17h CEST: Bayesian score calibration for approximate models
Joshua Bon, Ceremade – Université Paris Dauphine-PSL
Scientists continue to develop increasingly complex mechanistic models to reflect their knowledge more realistically. Statistical inference using these models can be challenging since the corresponding likelihood function is often intractable and model simulation may be computationally burdensome. Fortunately, in many of these situations, it is possible to adopt a surrogate model or approximate likelihood function. It may be convenient to base Bayesian inference directly on the surrogate, but this can result in bias and poor uncertainty quantification. In this paper we propose a new method for adjusting approximate posterior samples to reduce bias and produce more accurate uncertainty quantification. We do this by optimizing a transform of the approximate posterior that maximizes a scoring rule. Our approach requires only a (fixed) small number of complex model simulations and is numerically stable. We demonstrate good performance of the new method on several examples of increasing complexity.
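The mechanics of the approach can be illustrated on a toy problem: draw parameters, simulate data, form (deliberately biased) approximate posterior draws, then fit a transform of those draws by optimizing an average scoring rule. The sketch below uses a global affine transform and the energy score; all numbers, names, and the choice of transform are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Toy calibration setup: data y ~ N(theta, 1) with n_obs observations,
# and an "approximate posterior" that is deliberately shifted by +0.5.
n_sims, n_draws, n_obs = 200, 100, 10
theta0 = rng.normal(size=n_sims)                          # generating parameters
ybar = theta0 + rng.normal(size=n_sims) / np.sqrt(n_obs)  # simulated data summaries
approx = ybar[:, None] + 0.5 + 0.3 * rng.normal(size=(n_sims, n_draws))

def neg_energy_score(params):
    """Average energy score of the affine-adjusted draws against the
    true generating parameters (lower is better for this proper score)."""
    a, b = params
    adj = a * approx + b
    term1 = np.mean(np.abs(adj - theta0[:, None]))
    term2 = 0.5 * np.mean(np.abs(adj[:, :, None] - adj[:, None, :]))
    return term1 - term2

res = minimize(neg_energy_score, x0=[1.0, 0.0], method="Nelder-Mead")
a_hat, b_hat = res.x
print(a_hat, b_hat)  # b_hat should move toward -0.5, undoing the built-in bias
```

Because the energy score is a proper scoring rule, optimizing it pushes the adjusted draws toward the correct posterior, which is what delivers the reduced bias and better-calibrated uncertainty described in the abstract. Only the fixed batch of complex-model simulations is needed to fit the transform.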