Archive for Bayesian learning

end-to-end Bayesian learning [CIRM]

Posted in Books, Kids, Mountains, pictures, Running, Statistics, University life on February 1, 2021 by xi'an

Next Fall, there will be a workshop at CIRM, Luminy, Marseilles, on Bayesian learning. It takes place 22-29 October 2021 on this wonderful campus at the border of the beautiful Parc National des Calanques, in a wonderfully renovated CIRM building, and involves friends and colleagues of mine as organisers and plenary speakers. (I am not involved, but plan to organise a scalable MCMC workshop there the year after!) The conference is well supported and the housing fees will be minimal, since the centre is also subsidized by CNRS. The deadline for contributed talks and posters is 22 March, while registration closes on 15 June. Hopefully by then the horizon will have cleared up enough to consider travelling and meeting again. Hopefully. (In which case I will still miss this wonderful conference, due to other meeting and teaching commitments in the Fall.)

online approximate Bayesian learning

Posted in Statistics on September 25, 2020 by xi'an

My friends and coauthors Matthieu Gerber and Randal Douc have just arXived a massive paper on online approximate Bayesian learning, namely the handling of the posterior distribution on the parameters of a state-space model, which remains a challenge to this day… They start from the iterated batch importance sampling (IBIS) algorithm of Nicolas (Chopin, 2002), which he introduced in his PhD thesis. The online method they construct ("by online we mean that the memory and computational requirement to process each observation is finite and bounded uniformly in t") guarantees that the approximate posterior converges to the (pseudo-)true value of the parameter as the sample size grows to infinity, where the sequence of approximations is a Cesàro mixture of initial approximations with Gaussian or t priors, AMIS-like. (I am somewhat uncertain about the notion of a sequence of priors used in this setup. Another funny feature is the necessity of occasionally including a fat-tailed t prior in this sequence!) The sequence is in turn approximated by a particle filter. The computational cost of this IBIS is roughly O(NT), depending on the regeneration rate.
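For intuition only, here is a minimal sketch of a plain IBIS-style update on a toy i.i.d. Gaussian model, not the state-space setting or the Cesàro-mixture construction of the paper; the N(0,10) prior, the ESS threshold, and the random-walk move step are illustrative assumptions. It mainly shows why the cost is roughly O(NT): absorbing each observation into the weights costs O(N), while the occasional resample-move (regeneration) step revisits the past batch.

```python
# Minimal IBIS-style sketch (in the spirit of Chopin, 2002) on a toy model:
# i.i.d. N(theta, 1) observations with a N(0, 10) prior on theta.
# The model and tuning constants are illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

def log_lik(theta, y):
    # log N(y | theta, 1); works for scalar or array arguments via broadcasting
    return -0.5 * (y - theta) ** 2 - 0.5 * np.log(2 * np.pi)

def log_post(th, y, t):
    # unnormalised log posterior of theta given y_{1:t}: N(0, 10) prior plus likelihood
    return -th ** 2 / 20.0 + np.array([log_lik(v, y[: t + 1]).sum() for v in th])

T, N = 200, 1000
y = rng.normal(1.5, 1.0, size=T)            # synthetic data, "true" theta = 1.5
theta = rng.normal(0.0, np.sqrt(10.0), N)   # N parameter particles drawn from the prior
logw = np.zeros(N)                          # log importance weights

for t in range(T):
    # online step: absorb observation t into the weights, cost O(N)
    logw += log_lik(theta, y[t])

    # effective sample size; regeneration (resample-move) when it drops below N/2
    w = np.exp(logw - logw.max())
    ess = w.sum() ** 2 / (w ** 2).sum()
    if ess < N / 2:
        idx = rng.choice(N, size=N, p=w / w.sum())   # multinomial resampling
        theta, logw = theta[idx], np.zeros(N)
        # one Metropolis-Hastings move targeting p(theta | y_{1:t}); it reprocesses the
        # past batch, which is why the total cost depends on the regeneration rate
        prop = theta + 0.5 * rng.normal(size=N)
        accept = np.log(rng.uniform(size=N)) < log_post(prop, y, t) - log_post(theta, y, t)
        theta = np.where(accept, prop, theta)

w = np.exp(logw - logw.max())
print("estimated posterior mean:", np.average(theta, weights=w))
```

The weighted particle average should land close to the data-generating value 1.5; in the paper's state-space setting the per-observation likelihood is itself intractable and is replaced by a particle-filter estimate, which this toy sketch sidesteps.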

Bayesian workshop in the French Alps

Posted in Statistics on June 22, 2018 by xi'an