
séminaire parisien de statistique [09/01/23]

Posted in Books, pictures, Statistics, University life on January 22, 2023 by xi'an

I had missed the séminaire parisien de statistique for most of the Fall semester, hence was determined to attend the first session of the year 2023, all the more because the talks were close to my interests. To wit, Chiara Amorino spoke about particle systems for McKean-Vlasov SDEs parameterised by several parameters, observed repeatedly in discretised form, thereby establishing the consistency of a contrast estimator of these parameters. I was initially confused by the mention of interacting particles, since the work is not at all related to simulation. Just wondering whether this contrast could prove useful for a likelihood-free approach to building a Gibbs distribution?
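For readers unfamiliar with the interacting-particle connection, here is a minimal sketch of how a particle system approximates a mean-field SDE, on a toy linear model of my own choosing (not the model of the talk): the McKean-Vlasov expectation E[X_t] in the drift is replaced by the empirical mean of N particles, discretised by an Euler scheme.

```python
import numpy as np

def simulate_particles(theta, lam, sigma, n_particles=500, n_steps=200,
                       dt=0.01, rng=None):
    """Euler scheme for an interacting particle system approximating the
    toy mean-field (McKean-Vlasov) SDE
        dX_t = -theta X_t dt - lam (X_t - E[X_t]) dt + sigma dW_t,
    with the law-dependent term E[X_t] replaced by the empirical mean
    of the N particles. Returns the (n_steps+1, n_particles) paths."""
    rng = np.random.default_rng(rng)
    x = rng.normal(size=n_particles)          # initial particle positions
    path = np.empty((n_steps + 1, n_particles))
    path[0] = x
    for k in range(n_steps):
        # mean-field drift: pull toward zero and toward the empirical mean
        drift = -theta * x - lam * (x - x.mean())
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.normal(size=n_particles)
        path[k + 1] = x
    return path
```

Observing such a system at discrete times, for several values of (θ, λ, σ), is the kind of data the contrast estimator of the talk operates on.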

Valentin de Bortoli then spoke on diffusion Schrödinger bridges for generative models, which improved my understanding of the idea presented by Arnaud at the Flatiron workshop last November. The presentation here was quite different, using a forward versus backward explanation via a sequence of transforms that end up approximately Gaussian, once more reminiscent of sequential Monte Carlo. The transforms are themselves approximate Gaussian versions relying on a discretised Ornstein-Uhlenbeck process, with a missing score term since said score involves a marginal density at each step of the sequence. It can be represented [as below] as an expectation conditional on the (observed) variate at time zero (with a connection with Hyvärinen's NCE / score matching!). Practical implementation is done via neural networks.
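The forward half of this construction is easy to illustrate: a discretised Ornstein-Uhlenbeck recursion shrinks each sample toward zero and adds Gaussian noise, so that after enough steps the marginal is close to N(0, 1) whatever the data distribution was. This is only a sketch of the generic forward noising pass of diffusion models, not of the Schrödinger-bridge algorithm itself; the step size gamma is my own choice, calibrated so the stationary law is exactly standard normal.

```python
import numpy as np

def forward_ou(x0, n_steps=100, gamma=0.05, rng=None):
    """Discretised Ornstein-Uhlenbeck forward pass:
        x_{k+1} = (1 - gamma) x_k + sqrt(gamma (2 - gamma)) eps_k.
    The noise variance gamma*(2-gamma) solves b^2 / (1 - a^2) = 1 for
    a = 1 - gamma, so N(0, 1) is the exact stationary distribution and
    the chain forgets x0 geometrically fast."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = (1.0 - gamma) * x + np.sqrt(gamma * (2.0 - gamma)) * rng.normal(size=x.shape)
    return x
```

The backward (generative) pass then needs the score of each intermediate marginal, which is exactly the term that gets written as a conditional expectation given the time-zero variate and learned by a neural network.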

Last but not least, my friend Randal talked about his Kick-Kac formula, which connects with the one we considered in our 2004 paper with Jim Hobert. While I had heard earlier versions, this talk was mostly on the probabilistic aspects and highly enjoyable, as he included some short proofs. The formula expresses the stationary probability measure π of the original Markov chain in terms of excursions between two visits to an accessible set C, more general than a small set, with at first an annoying remainder term, due to the set not being Harris recurrent, that eventually cancels out. Memoryless transportation can be implemented because C is free for the picking, for instance the set where the target is bounded by a manageable density, allowing for an accept-reject step. The resulting chain is non-reversible. However, due to the difficulty of simulating from the target restricted to C, a second and parallel Markov chain is created instead. Performances, unsurprisingly, depend on the choice of C, but it can be adapted to the target on the go.
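A minimal toy version of the regeneration mechanism (my own reconstruction, not Randal's algorithm) can be coded in a few lines: run a plain random-walk Metropolis chain for π and, whenever the state falls in a set C on which the unnormalised density is bounded, redraw the state from π restricted to C by accept-reject, breaking the memory of the past. This composed kernel still leaves π invariant, since redrawing from π|C whenever in C preserves π, but the refreshment is no longer reversible.

```python
import numpy as np

def kick_style_chain(logpi, n_iter=20000, c=1.0, m=1.0, step=2.0, rng=None):
    """Toy regeneration sampler: random-walk Metropolis targeting
    exp(logpi), plus a memoryless redraw from pi restricted to
    C = [-c, c] on each visit to C, using accept-reject with the
    bound exp(logpi(u)) <= m on C. The parameters c, m, step are
    illustrative choices, here tuned for a standard normal target."""
    rng = np.random.default_rng(rng)
    x = 0.0
    out = np.empty(n_iter)
    for t in range(n_iter):
        # standard random-walk Metropolis move
        y = x + step * rng.normal()
        if np.log(rng.uniform()) < logpi(y) - logpi(x):
            x = y
        # regeneration step: redraw from pi restricted to C by accept-reject
        if abs(x) <= c:
            while True:
                u = rng.uniform(-c, c)
                if rng.uniform() < np.exp(logpi(u)) / m:
                    x = u
                    break
        out[t] = x
    return out
```

Every regeneration severs the dependence on the earlier trajectory, which is what makes the excursion decomposition behind the Kick-Kac formula operational; the actual construction in the talk avoids the costly restricted simulation by running a second, parallel chain.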
