## Archive for University of Edinburgh

## Bayes plaque

Posted in Books, pictures, Statistics, Travel, University life with tags Bayes theorem, Edinburgh, FRS, plaque, Royal Society, Scotland, Thomas Bayes, University of Edinburgh on November 22, 2019 by xi'an

## at the centre of Bayes

Posted in Mountains, pictures, Statistics, Travel, University life with tags 2019 Education Buildings Scotland Awards, Arthur's Seat, Bayes Café, Bayes Centre, Brown University, ECERM, Edinburgh, Holyrood Park, ICMS, jatp, Maxwell Institute Colloquium, Maxwell Institute Graduate School, Scotland, Scottish sun, seminar, Thomas Bayes, University of Edinburgh on October 14, 2019 by xi'an

## HW AMS & EPSRC MAG-MIGS CDT seminar

Posted in Statistics with tags ABC convergence, Bayes Centre, Centre for Doctoral Training, Edinburgh, EPSRC, Heriot-Watt University, Maxwell Institute Graduate School, misspecification, Scotland, University of Edinburgh on October 10, 2019 by xi'an

Some explanation for all these acronyms! I am giving an Actuarial Mathematics & Statistics (AMS) seminar at Heriot-Watt (HW) University, in Edinburgh, tomorrow. But in the (new) Bayes Centre, at the University of Edinburgh, rather than on the campus of Heriot-Watt, as this is also the launching day of the Centre for Doctoral Training (CDT) on Mathematical Modelling, Analysis, & Computation (MAG) shared between Heriot-Watt and the University of Edinburgh, funded by the EPSRC, and located in the Maxwell Institute Graduate School (MIGS) in its Bayes Centre. My talk will be on ABC convergence and misspecification.

## adaptive copulas for ABC

Posted in Statistics with tags ABC, ABC in Edinburgh, ABC-SMC, curse of dimensionality, Gaussian copula, neural network, post-processing, sequential ABC, University of Edinburgh on March 20, 2019 by xi'an

A paper on ABC I read on my way back from Cambodia: Yanzhi Chen and Michael Gutmann arXived an ABC [in Edinburgh] paper on learning the target via Gaussian copulas, to be presented at AISTATS this year (in Okinawa!). It links post-processing (regression) ABC and sequential ABC. The drawback of the regression approach is that the correction often relies on a homogeneity assumption on the distribution of the noise or residual, since this approach only applies a drift to the original simulated sample. Their method is based on two stages: a coarse-grained one, where the posterior is approximated by ordinary linear regression ABC, and a fine-grained one, which uses the above coarse Gaussian version as a proposal and returns a Gaussian copula estimate of the posterior. This proposal is somewhat similar to the neural network approach of Papamakarios and Murray (2016), and to the Gaussian copula version of Li et al. (2017), the major difference being the presence of two stages.

The new method is compared with other ABC proposals at a fixed simulation cost, which does not account for the construction costs, although these should be relatively negligible. To compare these ABC avatars, the authors use a symmetrised Kullback-Leibler divergence I had not met previously, requiring a massive numerical integration (although this is not an issue for the practical implementation of the method, which only calls for the construction of the neural network(s)). Note also that sequential ABC is only run for two iterations, and that none of the importance sampling ABC versions of Fearnhead and Prangle (2012) and of Li and Fearnhead (2018) are considered, all versions relying on the same vector of summary statistics, with a dimension much larger than the dimension of the parameter, except in our MA(2) example, where regression does as well. I wonder at the impact of the dimension of the summary statistic on the performance of the neural network, i.e., whether or not it is able to manage the curse of dimensionality by ignoring all but essentially the data statistics in the optimisation.
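The Gaussian copula step can be sketched generically: map each marginal of the ABC draws to normal scores via its empirical CDF, estimate the correlation matrix of those scores, and combine the resulting copula density with kernel estimates of the marginals. This is a minimal illustration of the general idea, not Chen and Gutmann's actual algorithm (which learns the marginals and dependence with neural networks); all function names are mine.

```python
# Minimal sketch of Gaussian-copula density estimation from posterior
# draws (illustrative, not the paper's implementation).
import numpy as np
from scipy import stats

def fit_gaussian_copula(samples):
    """samples: (n, d) array of (ABC) posterior draws."""
    n, d = samples.shape
    u = stats.rankdata(samples, axis=0) / (n + 1)   # pseudo-uniforms
    z = stats.norm.ppf(u)                           # normal scores
    corr = np.corrcoef(z, rowvar=False)             # copula correlation
    kdes = [stats.gaussian_kde(samples[:, j]) for j in range(d)]
    return corr, kdes

def copula_log_density(theta, corr, kdes, samples):
    """Gaussian-copula log-density estimate at the point theta."""
    n, d = samples.shape
    u = np.array([(samples[:, j] <= theta[j]).mean() for j in range(d)])
    u = np.clip(u, 1 / (n + 1), n / (n + 1))
    z = stats.norm.ppf(u)
    prec = np.linalg.inv(corr)
    # log c(u) = -0.5 log|R| - 0.5 z'(R^{-1} - I)z for a Gaussian copula
    log_cop = -0.5 * (np.log(np.linalg.det(corr))
                      + z @ (prec - np.eye(d)) @ z)
    log_marg = sum(np.log(kdes[j](theta[j])[0]) for j in range(d))
    return log_cop + log_marg

# toy check on correlated Gaussian "ABC draws"
rng = np.random.default_rng(0)
draws = rng.multivariate_normal([0, 0], [[1, .8], [.8, 1]], size=2000)
corr, kdes = fit_gaussian_copula(draws)
```

The point of the copula representation is precisely to escape the homogeneity restriction of linear regression adjustment: the dependence structure is estimated separately from the marginals.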

## from Arthur’s Seat [spot ISBA participants]

Posted in Mountains, pictures, Running, Travel with tags ABC in Edinburgh, Arthur's Seat, capital city, conference, Edinburgh, Holyrood, ISBA 2018, Lothians, morning run, Scotland, sunrise, University of Edinburgh, volcanoes on June 27, 2018 by xi'an

## Bayesian GANs [#2]

Posted in Books, pictures, R, Statistics with tags ABC in Edinburgh, Bayesian GANs, compatible conditional distributions, Edinburgh, GANs, generative adversarial networks, ISBA 2018, joint posterior, MCMC convergence, Metropolis-within-Gibbs algorithm, Monte Carlo Statistical Methods, normal model, University of Edinburgh on June 27, 2018 by xi'an

As an illustration of the lack of convergence of the Gibbs sampler applied to the two “conditionals” defined in the Bayesian GANs paper discussed yesterday, I took the simplest possible example of a Normal mean generative model (one parameter) with a logistic discriminator (one parameter) and implemented the scheme (during an ISBA 2018 session), with flat priors on both parameters and a Normal random walk as Metropolis-Hastings proposal. As expected, since there is no stationary distribution associated with the Markov chain, the simulated chains do not exhibit a stationary pattern.

And they eventually reach an overflow error or a trapping state as the log-likelihood approaches zero (red curve).
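For concreteness, here is a minimal re-creation of that experiment in Python (my own sketch, not the original R code): generator x = µ + ε with ε ~ N(0,1), discriminator D(x) = sigmoid(w·x), flat priors on µ and w, and Normal random-walk Metropolis moves on each "conditional". The conditionals and the fresh noise at each evaluation are my reading of the Bayesian GANs setup; since they are incompatible, the pair (µ, w) has no joint target.

```python
# Metropolis-within-Gibbs on the two incompatible Bayesian-GAN
# "conditionals" for a Normal mean generator and a logistic
# discriminator (illustrative sketch only).
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

rng = np.random.default_rng(1)
x_real = rng.normal(2.0, 1.0, size=50)         # observed data, N(2, 1)

def logpost_mu(mu, w):
    # generator "conditional": rewards fooling the discriminator;
    # fresh noise each call, mimicking fresh GAN simulations
    fake = mu + rng.normal(size=50)
    return np.sum(np.log(expit(w * fake) + 1e-300))

def logpost_w(w, mu):
    # discriminator "conditional": real-vs-fake classification likelihood
    fake = mu + rng.normal(size=50)
    return (np.sum(np.log(expit(w * x_real) + 1e-300))
            + np.sum(np.log(1 - expit(w * fake) + 1e-300)))

mu, w = 0.0, 0.0
chain = np.empty((2000, 2))
for t in range(2000):
    prop = mu + 0.3 * rng.normal()             # RW Metropolis step on mu
    if np.log(rng.uniform()) < logpost_mu(prop, w) - logpost_mu(mu, w):
        mu = prop
    prop = w + 0.3 * rng.normal()              # RW Metropolis step on w
    if np.log(rng.uniform()) < logpost_w(prop, mu) - logpost_w(w, mu):
        w = prop
    chain[t] = mu, w
```

Plotting the two columns of `chain` shows the wandering, non-stationary behaviour described above, since each Metropolis step targets a different, incompatible distribution.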

Too bad I missed the talk by Shakir Mohammed yesterday, being stuck on the Edinburgh by-pass at rush hour, as I would have loved to hear his views about this rather essential issue…