Archive for particle filter

Fusion at CIRM

Posted in Mountains, pictures, Statistics, Travel, University life with tags ABC, Bayesian non-parametrics, BNP, boar, Chili, CIRM, cold water swimming, data privacy, fusion, Les Calanques, Luminy, Luminy campus, Méditerranée, MCMC, Parc National des Calanques, particle filter, SMC, Université Aix Marseille, workshop on October 24, 2022 by xi'an

Today is the first day of the FUSION workshop Rémi Bardenet and I organised. Due to schedule clashes, I will alas not be there, being [no alas!] at the BNP conference in Chile. The program and collection of participants are quite exciting and I hope more fusion will result from this meeting. Enjoy! (And beware of boars, cold water, and cliffs!!!)

likelihood-free nested sampling
Posted in Books, Statistics with tags ABC, ABC-SMC, Approximate Bayesian computation, nested sampling, particle filter, PLoS computational biology, pMCMC on April 11, 2022 by xi'an

Last week, I came by chance across a paper by Jan Mikelson and Mustafa Khammash on a likelihood-free version of nested sampling (a popular keyword on the ‘Og!), published in 2020 in PLoS Computational Biology. The setup is a parameterised hidden state-space model, which allows for an approximation of the (observed) likelihood function L(θ|y) by means of a particle filter. An immediate issue with this proposal is that a new filter needs to be produced for each new value of the parameter θ, which makes it enormously expensive. It then gets more bizarre as the [Monte Carlo] distribution of the particle filter approximation ô(θ|y) is agglomerated with the original prior π(θ) into a joint “prior” [despite depending on the observed y] and a nested sampling is conducted with level sets of the form
ô(θ|y)>ε.
Actually, if the Monte Carlo error were null, that is, if the number of particles were infinite, then
ô(θ|y)=L(θ|y)
and this is indeed the original nested sampler. Simulation from the restricted region is done by constructing an extra density estimator of the constrained distribution (in θ)…
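To make the level-set mechanism concrete, here is a minimal Python sketch of nested sampling run on an *estimated* likelihood: a toy Gaussian model stands in for the state-space model, and zero-mean noise added to the exact log-likelihood mimics the Monte Carlo error of a particle-filter estimate ô(θ|y). The model, prior, and all constants are hypothetical, chosen only to illustrate the idea, not to reproduce the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_loglik(theta, n_mc=100):
    # Exact log-likelihood of y = 1.5 under y ~ N(theta, 1), plus zero-mean
    # noise standing in for the Monte Carlo error of a particle-filter
    # estimate (hypothetical toy model, not the paper's)
    y = 1.5
    exact = -0.5 * (y - theta) ** 2 - 0.5 * np.log(2 * np.pi)
    return exact + rng.normal(0.0, 1.0 / np.sqrt(n_mc))

def nested_sampling(n_live=50, n_iter=200):
    # prior: theta ~ N(0, 2^2); the estimated likelihood plays the role
    # of ô(theta|y) in the level sets ô(theta|y) > eps
    live = rng.normal(0.0, 2.0, n_live)
    live_ll = np.array([noisy_loglik(t) for t in live])
    log_Z, log_X = -np.inf, 0.0
    for t in range(n_iter):
        worst = np.argmin(live_ll)
        eps = live_ll[worst]
        log_X_new = -(t + 1) / n_live      # deterministic shrinkage exp(-t/N)
        log_w = eps + np.log(np.exp(log_X) - np.exp(log_X_new))
        log_Z = np.logaddexp(log_Z, log_w)
        log_X = log_X_new
        # refresh the worst point with a prior draw inside the level set
        cand_ll = -np.inf
        while cand_ll <= eps:
            cand = rng.normal(0.0, 2.0)
            cand_ll = noisy_loglik(cand)
        live[worst], live_ll[worst] = cand, cand_ll
    # remaining prior mass carried by the final live points
    final = log_X + np.log(np.mean(np.exp(live_ll)))
    return np.logaddexp(log_Z, final)

print(nested_sampling())   # log-evidence estimate for the toy model
```

With a noiseless likelihood this collapses to the classical nested sampler; with the noisy version it is the estimate ô, not L, that defines the nested level sets, which is exactly the point of contention above.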
“We have shown how using a Monte Carlo estimate over the livepoints not only results in an unbiased estimator of the Bayesian evidence Z, but also allows us to derive a formulation for a lower bound on the achievable variance in each iteration (…)”
As shown by the above quote, the authors insist on the unbiasedness of the particle approximation, but since nested sampling does not produce an unbiased estimator of the evidence Z, the point is somewhat moot. (I am also rather surprised by the reported lack of computing time benefit in running ABC-SMC.)
no filter [no jatp]
Posted in Kids, Mountains, pictures with tags California, climate change, Concord, forest fires, getty images, global warming, jatp, Limeridge, particle filter, red sun, The New Abnormal, USA on September 14, 2020 by xi'an

IMS workshop [day 4]
Posted in pictures, Statistics, Travel, University life with tags Feynman-Kac formalism, National University Singapore, NUS, optimal transport, particle filter, particle filters, particle MCMC, pseudo-marginal MCMC, sunrise, transportation model on August 31, 2018 by xi'an

While I did not repeat the mistake of yesterday morning (just as well, because the sun was unbearably strong!), I managed this time to board a bus headed in the wrong direction and as a result went through several remote NUS campi, missing the first talk of the day, by Youssef Marzouk, on a connection between sequential Monte Carlo and optimal transport. Transport for sampling, that is. The following talk by Tiangang Cui, with Marzouk as a co-author, was however related, as it aimed at finding linear transforms towards creating Normal approximations to the target, to be used as proposals in Metropolis algorithms. Which may sound like something already tried a zillion times in the MCMC literature, except that the setting was rather specific to some inverse problems, imposing a generalised Normal structure on the transform, then optimised by transport arguments. It is unclear to me [from just attending the talk] how complex this derivation is and how dimension steps in, but the illustrations produced were quite robust to an increase in dimension.
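As a generic illustration of the linear-transform idea (and emphatically not Cui and Marzouk's construction), one can fit an affine map from weighted pilot samples and use the resulting Normal approximation as an independence proposal in a Metropolis algorithm. The banana-shaped target and all tuning constants below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # toy banana-shaped 2-d target, a hypothetical stand-in for an
    # inverse-problem posterior
    return -0.5 * (x[0] ** 2 / 4 + (x[1] + 0.5 * x[0] ** 2) ** 2)

# crude pilot importance sample from N(0, 3^2 I) to estimate mean and
# covariance of the target
pilot = rng.normal(0, 3, size=(5000, 2))
logw = np.array([log_target(x) for x in pilot]) + 0.5 * np.sum(pilot**2, 1) / 9
w = np.exp(logw - logw.max()); w /= w.sum()
mu = w @ pilot
cov = (pilot - mu).T @ ((pilot - mu) * w[:, None])
L = np.linalg.cholesky(cov)
cov_inv = np.linalg.inv(cov)

def log_prop(x):
    d = x - mu
    return -0.5 * d @ cov_inv @ d

def mh_independence(n=2000):
    # independence Metropolis: proposals x = mu + L z, z ~ N(0, I), i.e. an
    # affine map pushing a standard Normal toward the target
    x = mu.copy()
    samples = []
    for _ in range(n):
        y = mu + L @ rng.standard_normal(2)
        log_a = (log_target(y) - log_prop(y)) - (log_target(x) - log_prop(x))
        if np.log(rng.uniform()) < log_a:
            x = y
        samples.append(x.copy())
    return np.array(samples)

s = mh_independence()
print(s.mean(axis=0))
```

The transport-map methods discussed in the talk replace this crude moment-matched affine map by transforms optimised under a generalised Normal structure, but the proposal mechanism is of the same flavour.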
The remaining talks for the day were mostly particular, from Anthony Lee introducing a new and almost costless way of producing variance estimates in particle filters, exploiting only the ancestry of particles, to Mike Pitt discussing the correlated pseudo-marginal algorithm developed with George Deligiannidis and Arnaud Doucet. Which somewhat paradoxically managed to fight the degeneracy [i.e., the need for a number of terms increasing like the time index T] found in independent pseudo-marginal resolutions, moving down to almost log(T)… With an interesting connection to the quasi-SMC approach of Mathieu and Nicolas. And Sebastian Reich also stressed the links with optimal transport in a talk about data assimilation that was way beyond my reach. The day concluded with fireworks, through a magisterial lecture by Professeur Del Moral on a continuous time version of PMCMC using the Feynman-Kac terminology. Pierre did a superb job of leading the whole room to the conclusion.
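A minimal sketch of the correlated pseudo-marginal idea, on a toy latent-Gaussian model rather than a state-space model: the auxiliary normals u behind the unbiased likelihood estimate are moved by an autoregressive step u′ = ρu + √(1−ρ²)ε rather than refreshed at each iteration, keeping successive likelihood estimates positively correlated. The model and all tuning constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
y = 1.0     # one observation of the toy model y = theta + eta + noise
M = 64      # Monte Carlo samples in the likelihood estimator

def loglik_hat(theta, u):
    # unbiased Monte Carlo estimate of p(y|theta) for eta ~ N(0,1),
    # noise ~ N(0,1), built from the auxiliary normals u (a stand-in
    # for the particle-filter estimate used in pMCMC)
    dens = np.exp(-0.5 * (y - theta - u) ** 2) / np.sqrt(2 * np.pi)
    return np.log(dens.mean())

def correlated_pm(n=5000, rho=0.99):
    theta, u = 0.0, rng.standard_normal(M)
    ll = loglik_hat(theta, u)
    out = []
    for _ in range(n):
        theta_p = theta + 0.5 * rng.standard_normal()
        # correlate the auxiliary variables instead of refreshing them;
        # the AR(1) move leaves their N(0, I) distribution invariant
        u_p = rho * u + np.sqrt(1 - rho ** 2) * rng.standard_normal(M)
        ll_p = loglik_hat(theta_p, u_p)
        # flat prior on theta for simplicity
        if np.log(rng.uniform()) < ll_p - ll:
            theta, u, ll = theta_p, u_p, ll_p
        out.append(theta)
    return np.array(out)

print(np.mean(correlated_pm()))   # posterior mean of theta is near y = 1
```

With ρ = 0 this is the ordinary pseudo-marginal sampler; taking ρ close to one makes the noise in the acceptance ratio largely cancel between numerator and denominator, which is the mechanism behind the reduced number of Monte Carlo terms mentioned above.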
unbiased consistent nested sampling via sequential Monte Carlo [a reply]
Posted in pictures, Statistics, Travel with tags auxiliary variable, Brisbane, evidence, marginal likelihood, nested sampling, Og, particle filter, QUT, unbiasedness on June 13, 2018 by xi'an

Rob Salomone sent me the following reply to my comments of yesterday about their recently arXived paper.
“Which never occurred as the number one difficulty there, as the simplest implementation runs a Markov chain from the last removed entry, independently from the remaining entries. Even stationarity is not an issue since I believe that the first occurrence within the level set is distributed from the constrained prior.”
“And then, in a twist that is not clearly explained in the paper, the focus moves to an improved nested sampler that moves one likelihood value at a time, with a particle step replacing a single particle. (Things get complicated when several particles may take the very same likelihood value, but randomisation helps.) At this stage the algorithm is quite similar to the original nested sampler. Except for the unbiased estimation of the constants, the final constant, and the replacement of the exponential weights exp(-t/N) by powers of ((N-1)/N).”
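For the record, the two shrinkage schedules in the last sentence are numerically close for moderate numbers of live points, since log((N−1)/N) = −1/N + O(1/N²); a short check:

```python
import numpy as np

# Two prior-mass shrinkage schedules in nested sampling with N live points:
# the classical deterministic X_t = exp(-t/N) and the geometric alternative
# X_t = ((N-1)/N)^t from the quote; they agree as N grows
N, T = 100, 500
t = np.arange(T)
x_exp = np.exp(-t / N)
x_pow = ((N - 1) / N) ** t
print(np.max(np.abs(x_exp - x_pow)))   # small for N = 100
```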