## Archive for Helsinki

## StanCon in Helsinki [29-31 Aug 2018]

Posted in Books, pictures, R, Statistics, Travel, University life with tags Aalto Science Institute, Baltic Sea, Bayesian Analysis, Bayesian conference, Finland, Helsinki, STAN, StanCon 2018, summer on March 7, 2018 by xi'an

As emailed to me by Aki Vehtari, the next StanCon will take place this summer in the wonderful city of Helsinki, at the end of August, on the Aalto University Töölö Campus to be precise. The list of speakers and tutorial teachers is available on the webpage. (The only “negative point” is that the conference does not include a Tuesday, the night of the Self-Transcendence 2 mile race!) This somewhat concludes a never-ending summer of Bayesian conferences!

## ABC gas

Posted in pictures, Running, Travel with tags ABC, ABC in Helsinki, brands, Finland, gas station, Helsinki, Munkkiniemen, tramways on August 9, 2017 by xi'an

## European statistics in Finland [EMS17]

Posted in Books, pictures, Running, Statistics, Travel, University life with tags ABC, AISTATS 2016, Amazon, AMIS, Bayesian optimisation, deterministic mixtures, EMS 2017, Europe, European Meeting of Statisticians, exact Monte Carlo, Helsinki, INLA, particle filters, probabilistic numerics, University of Helsinki on August 2, 2017 by xi'an

**W**hile this European meeting of statisticians had a wide range of talks and topics, I found it more low-key than the previous one I attended in Budapest, maybe because there was hardly any talk there in applied probability. (But there were some sessions in mathematical statistics, and Mark Girolami gave a great entry to differential geometry and MCMC, in the spirit of his 2010 discussion paper, using our recent trip to Montréal as an example of a geodesic!) In the Bayesian software session [organised by Aki Vehtari], Javier González gave a very neat introduction to Bayesian optimisation: he showed how optimisation can be turned into Bayesian inference or, more specifically, into a Bayesian decision problem using a loss function related to the problem of interest. The point in following a Bayesian path [or probabilistic numerics] is to reduce uncertainty by the medium of prior measures on functions, although resorting [as usual] to Gaussian processes, whose arbitrariness I somehow dislike within the infinity of priors (aka stochastic processes) on functions! One of his strong arguments was that the approach includes the possibility of design in picking the next observation point (as done in some ABC papers of Michael Gutmann and co-authors, incl. the following talk at EMS 2017), but again the devil may be in the implementation when looking at minimising an objective function… The notion of the myopia of optimisation techniques was another good point: only looking one step ahead in the future diminishes the returns of the optimisation, and an alternative presented at AISTATS 2016 [which I do not remember seeing in Cádiz] goes against this myopia.
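A minimal sketch of this view of optimisation as sequential inference, written in Python rather than taken from any speaker's material: a hand-rolled Gaussian-process posterior plus a lower-confidence-bound rule for picking the next evaluation point (the kernel, its scale, and the acquisition rule are all illustrative choices of mine, not what González presented).

```python
import numpy as np

def rbf_kernel(a, b, scale=0.2):
    """Squared-exponential covariance between 1-d point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / scale) ** 2)

def gp_posterior(x_obs, y_obs, x_new, noise=1e-4):
    """Posterior mean and sd of a zero-mean GP at the points x_new."""
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_new, x_obs)
    mean = Ks @ np.linalg.solve(K, y_obs)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, np.sqrt(np.maximum(var, 0.0))

def bayes_opt(f, lo, hi, n_init=3, n_iter=10, rng=None):
    """Minimise f on [lo, hi] by sequentially evaluating the point
    minimising the GP lower confidence bound (mean - 2 sd)."""
    rng = np.random.default_rng(rng)
    x = rng.uniform(lo, hi, n_init)
    y = f(x)
    grid = np.linspace(lo, hi, 200)
    for _ in range(n_iter):
        m, s = gp_posterior(x, y, grid)
        x_next = grid[np.argmin(m - 2.0 * s)]  # explore where sd is large
        x = np.append(x, x_next)
        y = np.append(y, f(x_next))
    return x[np.argmin(y)]

xmin = bayes_opt(lambda t: (t - 0.3) ** 2, 0.0, 1.0, rng=0)
```

Note that the acquisition step is exactly the one-step-ahead (myopic) choice discussed above: each iteration only optimises the immediate trade-off between posterior mean and posterior uncertainty.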

Umberto Picchini also gave a talk on exploiting synthetic likelihoods in a Bayesian fashion (in connection with the talk he gave last year at MCqMC 2016). I wondered at the use of INLA for this Gaussian representation, as well as at the impact of the parameterisation of the summary statistics. And the session organised by Jean-Michel involved Jimmy Olsson, Murray Pollock (Warwick) and myself, with great talks from both other speakers, on PaRIS and PaRISian algorithms by Jimmy, and on a wide range of exact simulation methods of continuous-time processes by Murray, both managing to convey the intuition behind their results and avoiding the massive mathematics at work there. By comparison, I must have been quite unclear during my talk, since someone interrupted me about how Owen & Zhou (2000) justified their deterministic mixture importance sampling representation. And then left when I could not make sense of his questions [or because it was lunchtime already].
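For readers wondering about that deterministic mixture representation of Owen & Zhou (2000), here is an illustrative Python sketch on a toy Gaussian example of my own (not the setting of the talk): with equal sample sizes per component, each draw is weighted by the full mixture density rather than by the density of the component it was drawn from, which keeps the weights bounded.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) evaluated pointwise."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def deterministic_mixture_is(h, mus, sigmas, n_per, rng=None):
    """Estimate E_pi[h(X)] for pi = N(0,1), drawing a fixed number of
    points from each proposal and weighting by the whole mixture."""
    rng = np.random.default_rng(rng)
    xs = np.concatenate([rng.normal(m, s, n_per) for m, s in zip(mus, sigmas)])
    # mixture density q(x) = (1/J) sum_j q_j(x), evaluated at every draw
    q = np.mean([normal_pdf(xs, m, s) for m, s in zip(mus, sigmas)], axis=0)
    w = normal_pdf(xs, 0.0, 1.0) / q       # target over mixture weights
    return np.mean(w * h(xs))

# E[X^2] under N(0,1) is 1; two overdispersed proposals centred at ±1
est = deterministic_mixture_is(lambda x: x ** 2, mus=[-1.0, 1.0],
                               sigmas=[1.5, 1.5], n_per=20000, rng=1)
```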

## Helsingin satama ja katedraali [Helsinki harbour and cathedral, jatp]

Posted in pictures, Running, Travel, University life with tags Baltic Sea, boats, EMS 2017, Finland, Gulf of Bothnia, Helsinki, jatp, orthodox cathedral, sunrise on July 27, 2017 by xi'an

## Self-Transcendence 2 mile races in Helsinki

Posted in Running, Travel, University life with tags 2 miles, 6 minutes mile, EMS 2017, Finland, Helsinki, Hinduism, Munkkiniemen, road race, self-transcendence, Sri Chinmoy, Sri Chinmoy Marathon, tramways on July 26, 2017 by xi'an

**U**pon our arrival in Helsinki for EMS 2017, Jean-Michel Marin pointed out to me the existence of a 2 mile race the next day [as every Tuesday], indicated on the webpage of the conference. And encouraged me to run it. As I had brought my running gear and did not need preparation for a 2 miler [a mere loop in the park!], I decided to try the race, and we took a tram all the way to a faraway suburb where a few other runners had gathered. It was very relaxed and friendly, with recyclable cloth bibs and free juice available. Against all odds, I managed to win the race, with a first mile under six minutes thanks to another runner rushing me. And got a tee-shirt as a reward. (Checking on Wikipedia later, I found that Sri Chinmoy was an Indian guru advocating running as a form of meditation, which I find rather absurd as I am anything but meditating when running a race..!)

## Takaisin Helsinkiin [back to Helsinki]

Posted in pictures, Statistics, Travel with tags ABCruise, conference, EMS 2017, Europe, ferry harbour, Finland, folded Markov chain, Helsinki, North, Randal Douc, Scandinavia on July 23, 2017 by xi'an

**I** am off tomorrow morning to Helsinki for the European Meeting of Statisticians (EMS 2017). Where I will talk on how to handle multiple estimators in Monte Carlo settings (although I have not made enough progress in this direction to include anything truly novel in the talk!) Here are the slides:

I look forward to this meeting, as I remember quite fondly the previous one I attended in Budapest. Which was of the highest quality in terms of talks and interactions. (I also remember working hard with Randal Douc on a yet-unfinished project!)

## ABC random forests for Bayesian parameter inference

Posted in Books, Kids, R, Statistics, Travel, University life, Wines with tags ABC approximation error, ABC in Helsinki, abcrf, ABCruise, arXiv, Baltic Sea, Bayesian inference, Gulf of Bothnia, Helsinki, Lapin Kulta, out-of-bag correction, R, random forests, reference table, sunrise on May 20, 2016 by xi'an

**B**efore leaving Helsinki, we arXived [from the Air France lounge!] the paper Jean-Michel presented on Monday at ABCruise in Helsinki. This paper summarises the experiments Louis conducted over the past months to assess the great performance of a random forest regression approach to ABC parameter inference, thus validating, in this experimental sense, the use of random forests for conducting ABC Bayesian inference. (And not ABC model choice as in the Bioinformatics paper with Pierre Pudlo and others.)

I think the major incentives for exploiting the (still mysterious) tool of random forests [against more traditional ABC approaches like Fearnhead and Prangle (2012) on summary selection] are that (a) forests do not require a preliminary selection of the summary statistics, since an arbitrary number of summaries can be used as input for the random forest, even when including a large number of useless white-noise variables; (b) there is no longer a tolerance level involved in the process, since the many trees in the random forest define a natural if rudimentary distance that corresponds to being or not being in the same leaf as the observed vector of summary statistics η(y); and (c) the size of the reference table simulated from the prior (predictive) distribution does not need to be as large as in usual ABC settings, hence this approach leads to significant gains in computing time, since the production of the reference table usually is the costly part! To the point that deriving a different forest for each univariate transform of interest is truly a minor drag in the overall computing cost of the approach.
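The three incentives above can be made concrete with a toy sketch. This is emphatically not our abcrf implementation, just a hypothetical Python illustration using scikit-learn and a made-up normal model: simulate a reference table from the prior, include useless white-noise summaries, and regress the parameter on all summaries at once, with no tolerance anywhere.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
N = 5000                                    # size of the reference table

# toy model: theta ~ N(0,1) prior, y_1..y_10 | theta ~ N(theta, 1)
theta = rng.normal(0.0, 1.0, N)
data = rng.normal(theta[:, None], 1.0, (N, 10))

# summaries: sample mean and variance, plus two useless noise statistics
summaries = np.column_stack([data.mean(axis=1), data.var(axis=1),
                             rng.normal(size=N), rng.normal(size=N)])

forest = RandomForestRegressor(n_estimators=200, min_samples_leaf=5,
                               random_state=0).fit(summaries, theta)

# pseudo-observed data generated with theta = 1; the noise summary
# coordinates of the observation are arbitrary placeholders
y_obs = rng.normal(1.0, 1.0, 10)
s_obs = np.array([[y_obs.mean(), y_obs.var(), 0.0, 0.0]])
post_mean = forest.predict(s_obs)[0]        # approximates E[theta | y_obs]
```

In this conjugate toy case the exact posterior mean is 10·ȳ/11, so the forest output can be checked directly; the noise summaries are simply ignored by the splits most of the time, illustrating point (a).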

An intriguing point we uncovered through Louis’ experiments is that an unusual version of the variance estimator is preferable to the standard estimator: we indeed observed better estimation performance when using a weighted version of the out-of-bag residuals (which are computed as the differences between the simulated value of the parameter transforms and their expectation obtained by removing the random trees involving this simulated value). Another intriguing feature [to me] is that the regression weights as proposed by Meinshausen (2006) are obtained as an average of the inverse of the number of terms in the leaf of interest. When estimating the posterior expectation of a transform h(θ) given the observed η(y), this summary statistic η(y) ends up in a given leaf for each tree in the forest, and all that matters for computing the weight is the number of points from the reference table ending up in this very leaf. I do find this difficult to explain when confronting the case when many simulated points are in the leaf against the case when a single simulated point makes the leaf. This single point ends up being much more influential than all the points in the other situation… While being an outlier of sorts against the prior simulation. But now that I think more about it (after an expensive Lapin Kulta beer in the Helsinki airport while waiting for a change of tire on our airplane!), it somewhat makes sense that rare simulations that agree with the data should be weighted much more than values that stem from the prior simulations and hence do not convey much of the information brought by the observation. (If this sounds murky, blame the beer.) What I found great about this new approach is that it produces a non-parametric evaluation of the cdf of the quantity of interest h(θ) at no calibration cost, or hardly any. (An R package is in the making, to be added to the existing R functions of abcrf we developed for the ABC model choice paper.)
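The Meinshausen (2006) leaf-counting weights are easy to spell out in code. Below is a short Python sketch on a fabricated one-summary example (using scikit-learn forests, not abcrf): each reference-table point gets, per tree, 1/(leaf size) if it shares the observation's leaf and 0 otherwise, averaged over trees; the resulting weights sum to one and give the non-parametric cdf evaluation mentioned above.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
N = 2000
theta = rng.normal(0.0, 1.0, N)                      # prior draws
s = theta[:, None] + rng.normal(0.0, 0.5, (N, 1))    # one noisy summary

forest = RandomForestRegressor(n_estimators=100, min_samples_leaf=5,
                               random_state=0).fit(s, theta)

s_obs = np.array([[0.8]])             # observed summary statistic
leaves_train = forest.apply(s)        # (N, n_trees) leaf index per tree
leaves_obs = forest.apply(s_obs)[0]   # (n_trees,) leaves of the observation

# Meinshausen weights: average over trees of
# 1{same leaf as s_obs} / (number of reference points in that leaf)
w = np.zeros(N)
for t in range(forest.n_estimators):
    in_leaf = leaves_train[:, t] == leaves_obs[t]
    w[in_leaf] += 1.0 / in_leaf.sum()
w /= forest.n_estimators

# weighted evaluation of the posterior cdf of theta at zero
cdf_at_0 = np.sum(w * (theta <= 0.0))
```

A reference point sitting alone in a leaf receives the full 1/1 for that tree, which is precisely the single-point-makes-the-leaf situation discussed above.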