## Archive for Melbourne

## off to Australia

Posted in pictures, Statistics, Travel, University life, Wines with tags ABC, ABC convergence, asymptotic normality, Australia, consistency, Melbourne, Monash University, Qantas, San Francisco, Yarra river on August 22, 2016 by xi'an

**T**aking advantage of being in San Francisco, I flew yesterday to Australia over the Pacific, crossing the date line for the first time. The 15-hour Qantas flight to Sydney was remarkably smooth and quiet, with most passengers sleeping most of the way, and it gave me a great opportunity to go over several papers I wanted to read and review. Over the next week or so, I will work with my friends and co-authors David Frazier and Gael Martin at Monash University (and undoubtedly enjoy the great food and wine scene!), before flying back to Paris (alas via San Francisco rather than direct).

## asymptotic properties of Approximate Bayesian Computation

Posted in pictures, Statistics, Travel, University life with tags ABC, asymptotic normality, Australia, Bayesian inference, concentration inequalities, consistency, convergence, identifiability, Melbourne, Monash University, summary statistics on July 26, 2016 by xi'an

**W**ith David Frazier and Gael Martin from Monash University, and with Judith Rousseau (Paris-Dauphine), we have now completed and arXived a paper entitled *Asymptotic Properties of Approximate Bayesian Computation*. This paper undertakes a fairly complete study of the large-sample properties of ABC under weak regularity conditions. We produce sufficient conditions for posterior concentration, asymptotic normality of the ABC posterior estimate, and asymptotic normality of the ABC posterior mean. Moreover, these theoretical results are of significant import for practitioners of ABC, as they pertain to the choice of the tolerance ε used within ABC for selecting parameter draws. In particular, the results contradict the conventional ABC wisdom that this tolerance should always be taken as *small* as the computing budget allows.

Now, this paper bears some similarities with our earlier paper on the consistency of ABC, written with David and Gael. As it happens, that paper was rejected after submission and I then discussed it in an internal seminar at Paris-Dauphine, with Judith taking part in the discussion and quickly suggesting an alternative approach that is now central to the current paper. The previous version analysed Bayesian consistency of ABC under specific uniformity conditions on the summary statistics used within ABC, but the conditions for consistency are now much weaker than before, thanks to Judith's input!

There are also similarities with Li and Fearnhead (2015), previously discussed here. However, while similar in spirit, the results contained in the two papers strongly differ on several fronts:

- Li and Fearnhead (2015) considers an ABC algorithm based on kernel smoothing, whereas our interest is in the original ABC accept-reject and its many derivatives;
- our theoretical approach permits a complete study of the asymptotic properties of ABC, posterior concentration, asymptotic normality of ABC posteriors, and asymptotic normality of the ABC posterior mean, whereas Li and Fearnhead (2015) is only concerned with asymptotic normality of the ABC posterior mean estimator (and various related point estimators);
- the results of Li and Fearnhead (2015) are derived under very strict uniformity and continuity/differentiability conditions, which bear a strong resemblance to the conditions in Yuan and Clark (2004) and Creel et al. (2015), while the results herein do not rely on such conditions and only assume very weak regularity conditions on the summary statistics themselves; this difference allows us to characterise the behaviour of ABC in situations not covered by the approach taken in Li and Fearnhead (2015).
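As an aside for readers less familiar with the method, the original ABC accept-reject scheme mentioned in the first point can be sketched in a few lines. Everything below (toy Normal model, prior, tolerances) is illustrative and not taken from the paper; the point is simply that the ABC posterior tightens as ε shrinks, while the acceptance rate collapses.

```python
# Minimal sketch of vanilla ABC accept-reject on a toy N(theta, 1) model,
# using the sample mean as summary statistic. Illustrative settings only.
import numpy as np

rng = np.random.default_rng(0)

n = 100                       # sample size
theta_true = 2.0
y_obs = rng.normal(theta_true, 1.0, n)
s_obs = y_obs.mean()          # observed summary

def abc_reject(s_obs, eps, n_sims=50_000):
    """Keep prior draws whose simulated summary lands within eps of s_obs."""
    theta = rng.normal(0.0, 5.0, n_sims)          # N(0, 25) prior draws
    # the sample mean of n N(theta, 1) draws is N(theta, 1/n),
    # so we can simulate the summary directly
    s_sim = rng.normal(theta, 1.0 / np.sqrt(n))
    keep = np.abs(s_sim - s_obs) <= eps
    return theta[keep]

# As eps decreases, the ABC posterior concentrates around theta_true,
# but the acceptance rate (hence the computing cost) deteriorates.
for eps in (1.0, 0.3, 0.1):
    draws = abc_reject(s_obs, eps)
    print(f"eps={eps}: accept rate {draws.size / 50_000:.3f}, "
          f"posterior mean {draws.mean():.2f}")
```

The trade-off printed by the loop is precisely what the tolerance results above speak to: ε governs both the statistical accuracy of the ABC posterior and the simulation budget needed to populate it.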

## postdoc position at Monash, Melbourne

Posted in Kids, pictures, Statistics, Travel, University life with tags astrostatistics, Australia, machine learning, Melbourne, Monash, postdoctoral position, Victoria on June 21, 2016 by xi'an

*[David Dowe sent me the following ad for a position of research fellow in statistics, machine learning, and Astrophysics at Monash University, Melbourne.]*

RESEARCH FELLOW: in Statistics and Machine Learning for Astrophysics, Monash University, Australia, deadline 31 July.

We seek to fill a 2.5 year post-doctoral fellowship dedicated to extensions and applications of the Bayesian Minimum Message Length (MML) technique to the analysis of spectroscopic data from recent large astronomical surveys, such as GALAH (GALactic Archaeology with HERMES). The position is based jointly within the Monash Centre for Astrophysics (MoCA, in the School of Physics and Astronomy) and the Faculty of Information Technology (FIT).

The successful applicant will develop and extend the MML method as needed, applying it to spectroscopic data from the GALAH project, with the aim of understanding nucleosynthesis in stars as well as the formation and evolution of our Galaxy ("galactic archaeology"). The position is based at the Clayton campus (in suburban Melbourne, Australia) of Monash University, which hosts approximately 56,000 equivalent full-time students spread across its Australian and off-shore campuses, and approximately 3500 academic staff.

The successful applicant will work with world experts in both the Bayesian information-theoretic MML method as well as nuclear astrophysics. The immediate supervisors will be Professor John Lattanzio (MoCA), Associate Professor David Dowe (FIT) and Dr Aldeida Aleti (FIT).

## auxiliary likelihood-based approximate Bayesian computation in state-space models

Posted in Books, pictures, Statistics, University life with tags ABC, auxiliary model, consistency, Kalman filter, Melbourne, Monash University, score function, summary statistics on May 2, 2016 by xi'an

**W**ith Gael Martin, Brendan McCabe, David T. Frazier, and Worapree Maneesoonthorn, we arXived (and submitted) a strongly revised version of our earlier paper. We begin by demonstrating that reduction to a set of *sufficient* statistics of reduced dimension relative to the sample size is infeasible for most state-space models, hence calling for the use of *partial* posteriors in such settings. We then give conditions [like parameter identification] under which ABC methods are Bayesian consistent when using an auxiliary model to produce summaries, either as MLEs or [more efficiently] scores. Indeed, at the order of accuracy required by the ABC perspective, scores are equivalent to MLEs but much faster to compute. These conditions happen to be weaker than those found in the recent papers of Li and Fearnhead (2016) and Creel et al. (2015); in particular, we make no assumption about the limiting distributions of the summary statistics. We also tackle the dimensionality curse that plagues ABC techniques by numerically exhibiting the improved accuracy brought by looking at marginal rather than joint modes, that is, by matching individual parameters via the corresponding scalar score of the *integrated* auxiliary likelihood rather than matching on the multi-dimensional score statistic. The approach is illustrated on realistically complex models, namely a latent Ornstein-Uhlenbeck process, for which a discrete-time linear Gaussian approximation and a Kalman filter auxiliary likelihood are adopted, and a square-root volatility process, with an auxiliary likelihood associated with an Euler discretisation and the augmented unscented Kalman filter.
In our experiments, we compared our auxiliary-based technique with the two-step approach of Fearnhead and Prangle (in their 2012 Read Paper), exhibiting improvement on the examples analysed therein. Somewhat predictably, an important challenge in this approach, one it shares with the related techniques of indirect inference and the efficient method of moments, is the choice of a computationally efficient and accurate auxiliary model. But most of the current ABC literature discusses the role and choice of the summary statistics, which amounts to the same challenge, while missing the regularity provided by the score functions of our auxiliary models.

## Peter Hall (1951-2016)

Posted in Books, Statistics, Travel, University life with tags ACEMS, Australia, Belgium, bootstrap, CORE, Glasgow, Louvain, martingales, Melbourne, non-parametrics, obituary, Peter Hall, Scotland, University of Glasgow on January 10, 2016 by xi'an

**I** just heard that Peter Hall passed away yesterday in Melbourne. Very sad news from down under. Besides being a giant in the fields of statistics and probability, with an astounding publication record, Peter was also a wonderful man and very much involved in running local, national and international societies. His contributions to the field and the profession are innumerable and his loss impacts the entire community. Peter was a regular visitor at Glasgow University in the 1990s and I crossed paths with him a few times, appreciating his kindness as well as his utmost dedication to research. In addition, he was a gifted photographer and I recall that the [now closed] wonderful guest-house where we used to stay at the top of Hillhead had a few pictures of his taken in the Highlands and framed on its walls. (If I remember well, there were also beautiful pictures of the Belgian countryside by him at CORE, in Louvain-la-Neuve.) I think the last time we met was in Melbourne, three years ago… Farewell, Peter, you certainly left an indelible imprint on a lot of us.

*[Song Chen from Beijing University has created a memorial webpage for Peter Hall to express condolences and share memories.]*

## consistency of ABC

Posted in pictures, Statistics, Travel, University life with tags ABC, consistency, convergence diagnostics, Ian Potter collection, identifiability, indirect inference, MA(p) model, Melbourne, Monash University, ODEs on August 25, 2015 by xi'an

**A**long with David Frazier and Gael Martin from Monash University, Melbourne, we have just completed (and arXived) a paper on the (Bayesian) consistency of ABC methods, producing sufficient conditions on the summary statistics to ensure consistency of the ABC posterior. Consistency here means the posterior concentrating at the true value of the parameter as the sample size and the inverse tolerance (intolerance?!) go to infinity. The conditions are essentially that the summary statistic concentrates around its mean and that this mean identifies the parameter. They are thus weaker than the conditions found in earlier consistency results, where the authors considered convergence to the genuine posterior distribution (given the summary), as for instance in Biau et al. (2014) or Li and Fearnhead (2015). In particular, we do not require a specific rate of decrease to zero for the tolerance ε. Still, the conditions do not always hold, as shown by the MA(2) example with its first two autocorrelations as summaries, an example we started using in the Marin et al. (2011) survey. We further propose a consistency assessment based on the main consistency theorem, namely checking that the ABC-based estimates of the marginal posterior densities for the parameters, estimated from simulated data, vary little when extra components are added to the summary statistic, and that the mean of the resulting summary statistic is indeed one-to-one. This may sound somewhat similar to the stepwise search algorithm of Joyce and Marjoram (2008), but those authors aim at obtaining a vector of summary statistics that is as informative as possible. We also examine the consistency conditions when using an auxiliary model as in indirect inference, for instance when using an AR(2) auxiliary model for estimating an MA(2) model, or when using ODE models.
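The proposed assessment can be illustrated on a toy example (again, not the paper's implementation): run ABC twice, once with a base summary and once with an extra component appended, and check how much the marginal ABC posterior for the parameter moves. Here the model is N(θ, 1), the base summary is the sample mean, and the added component is the sample variance; all settings are illustrative.

```python
# Toy illustration of the consistency diagnostic: compare ABC output under
# a base summary (sample mean) and an extended one (mean + variance).
import numpy as np

rng = np.random.default_rng(2)

n = 200
y_obs = rng.normal(1.5, 1.0, n)

def summary(y, extended):
    s = [y.mean()]
    if extended:
        s.append(y.var())   # extra component, uninformative about theta here
    return np.array(s)

def abc(extended, eps=0.1, n_sims=40_000):
    s_obs = summary(y_obs, extended)
    kept = []
    for _ in range(n_sims):
        theta = rng.uniform(-5, 5)          # flat prior on theta
        z = rng.normal(theta, 1.0, n)
        if np.linalg.norm(summary(z, extended) - s_obs) <= eps:
            kept.append(theta)
    return np.array(kept)

base, ext = abc(False), abc(True)
# If the base summary already identifies theta, adding a component should
# barely move the ABC marginal for theta, which is the diagnostic above.
print(f"shift in posterior mean: {abs(base.mean() - ext.mean()):.3f}")
```

A large shift between the two runs would instead flag a summary whose mean fails to identify the parameter, as in the MA(2) counter-example mentioned above.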

## locally weighted MCMC

Posted in Books, Statistics, University life with tags Australia, effective sample size, Harvard University, Melbourne, parallel MCMC, Rao-Blackwellisation, recycling, St Kilda, vanilla Rao-Blackwellisation on July 16, 2015 by xi'an

**L**ast week, on arXiv, Espen Bernton, Shihao Yang, Yang Chen, Neil Shephard, and Jun Liu (all from Harvard) proposed a weighting scheme for MCMC simulations, in connection with the parallel MCMC of Ben Calderhead discussed earlier on the 'Og. The weight attached to each proposal is either the acceptance probability itself (with the rejection probability attached to the current value of the MCMC chain) or a renormalised version of the joint target × proposal density, either forward or backward. Both solutions are unbiased in that they have the same expectation as the original MCMC average, being some sort of conditional expectation. The proof of domination in the paper builds upon Calderhead's formalism.
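The first weighting option above is easy to sketch: along a standard Metropolis-Hastings run, each proposal contributes with weight equal to its acceptance probability and the current state with the complementary weight, Rao-Blackwellising the accept/reject uniform. The target and tuning below are toy choices of mine, not taken from the paper.

```python
# Standard MH versus the proposal-weighted average: both estimate
# E[X^2] = 1 under a N(0, 1) target; the weighted version integrates
# out the accept/reject uniform.
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    return -0.5 * x * x          # standard normal target, up to a constant

def weighted_mh(n_iter=20_000, step=2.0):
    x = 0.0
    plain, weighted = [], []
    for _ in range(n_iter):
        y = x + step * rng.normal()                       # random-walk proposal
        alpha = min(1.0, np.exp(log_target(y) - log_target(x)))
        # weighted estimator: proposal weighted by alpha, current state by 1-alpha
        weighted.append(alpha * y**2 + (1 - alpha) * x**2)
        if rng.uniform() < alpha:
            x = y
        plain.append(x**2)        # standard ergodic average of h(x) = x^2
    return np.mean(plain), np.mean(weighted)

plain, weighted = weighted_mh()
print(f"plain average: {plain:.3f}, weighted average: {weighted:.3f}")
```

Both averages converge to the same expectation, which is the unbiasedness property mentioned above; the domination (variance) argument is the part that requires Calderhead's formalism.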

This work reminded me of several reweighting proposals we made over the years, from the global Rao-Blackwellisation strategy with George Casella to the vanilla Rao-Blackwellisation solution we wrote with Randal Douc a few years ago, both of which demonstrably improve upon the standard MCMC average, whether by recycling proposed but rejected values or by diminishing the variability due to the uniform draw. The somewhat parallel nature of the approach also connects with our parallel MCMC version with Pierre Jacob (now at Harvard as well!) and Murray Smith (who now lives in Melbourne, hence the otherwise unrelated picture).