Archive for Australia

highway to Hell [so long, Malcolm!]

Posted in Kids, pictures on November 18, 2017 by xi'an

lazy ABC…what?!

Posted in Kids, pictures, Statistics on November 8, 2017 by xi'an

positions at QUT stats

Posted in Statistics on September 4, 2017 by xi'an

Chris Drovandi sent me the information that the Statistics Group at QUT, Brisbane, is advertising for three positions:

This is a great opportunity with a very active group in a great location, which I have visited several times, so if interested, apply before October 1.

model misspecification in ABC

Posted in Statistics on August 21, 2017 by xi'an

With David Frazier and Judith Rousseau, we just arXived a paper studying the impact of a misspecified model on the outcome of an ABC run. This is a question that naturally arises when using ABC, but that has not been directly covered in the literature, apart from a recently arXived paper by James Ridgway [commented on the ‘Og earlier this month]. On the one hand, ABC can be seen as a robust method in that it focuses on the aspects of the assumed model that are translated by the [insufficient] summary statistics and their expectation. And nothing else. It is thus tolerant of departures from the hypothetical model that [almost] preserve those moments. On the other hand, ABC involves a degree of non-parametric estimation of the intractable likelihood, which may sound even more robust, except that the likelihood is estimated from pseudo-data simulated from the “wrong” model in case of misspecification.
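For readers less familiar with the mechanics, here is a minimal, self-contained Python sketch of plain accept/reject ABC. The exponential data, assumed Gaussian location model, mean summary, prior, and tolerance are all hypothetical toy choices for illustration, not the setting of the paper; the point it conveys is simply that the pseudo-data is always simulated from the assumed model, misspecified or not.

```python
import numpy as np

def abc_accept_reject(y_obs, prior_sample, simulate, summary, eps, n_sims=50_000):
    """Toy accept/reject ABC: keep parameter draws whose simulated
    summary falls within eps of the observed summary."""
    s_obs = summary(y_obs)
    kept = []
    for _ in range(n_sims):
        theta = prior_sample()               # draw from the prior
        y_sim = simulate(theta, len(y_obs))  # pseudo-data from the *assumed* model
        if abs(summary(y_sim) - s_obs) < eps:
            kept.append(theta)
    return np.array(kept)

# hypothetical setup: data generated from a skewed (exponential) model,
# while ABC assumes a Gaussian location model, i.e. a misspecified model
rng = np.random.default_rng(0)
y_obs = rng.exponential(scale=2.0, size=200)            # "true" generating model
prior_sample = lambda: rng.normal(0.0, 10.0)            # vague prior on the mean
simulate = lambda theta, n: rng.normal(theta, 1.0, n)   # assumed (wrong) model
summary = lambda y: y.mean()                            # insufficient summary statistic

post = abc_accept_reject(y_obs, prior_sample, simulate, summary, eps=0.1)
print(post.mean(), post.std())  # concentrates near the value matching the observed mean
```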

In the paper, we examine how the pseudo-true value of the parameter [that is, the value of the parameter of the misspecified model that comes closest to the generating model in terms of Kullback-Leibler divergence] is asymptotically reached by some ABC algorithms like the ABC accept/reject approach, but not by others like the popular linear regression [post-simulation] adjustment, which surprisingly concentrates posterior mass on a completely different pseudo-true value. Exploiting our recent assessment of ABC convergence for well-specified models, we show the above convergence result for a tolerance sequence that decreases to the minimum possible distance [between the true expectation and the misspecified expectation] at a slow enough rate, or equivalently for a sequence of acceptance probabilities that goes to zero at the proper speed. In the case of the regression correction, the pseudo-true value is shifted by a quantity that does not converge to zero, because of the misspecification in the expectation of the summary statistics. This is not immensely surprising, but we hence get a very different picture when compared with the well-specified case, where regression corrections improve the asymptotic behaviour of the ABC estimators. This discrepancy between the two versions of ABC can be exploited to seek misspecification diagnoses, e.g., through the acceptance rate versus the tolerance level, or via a comparison of the ABC approximations to the posterior expectations of quantities of interest, which should diverge at rate √n. In both cases, ABC reference tables/learning bases can be exploited to draw and calibrate a comparison with the well-specified case.
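To make the comparison concrete, here is an illustrative Python sketch that computes, on the same reference table, both the plain accept/reject approximation and a simplified [unweighted] version of the linear regression adjustment, and prints the two posterior means alongside the acceptance rate. The model, summary, and tolerance are again hypothetical toy choices; only the principle, namely that a marked disagreement between the two approximations hints at misspecification, comes from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical toy setting: observed data from an exponential model,
# ABC run under an assumed N(theta, 1) model with the sample mean as summary
n_obs = 200
y_obs = rng.exponential(scale=2.0, size=n_obs)
s_obs = y_obs.mean()

# reference table: prior draws and simulated summaries
# (the sample mean of n_obs iid N(theta, 1) draws is N(theta, 1/n_obs))
n_sims, eps = 200_000, 0.2
theta = rng.normal(0.0, 10.0, n_sims)
s_sim = rng.normal(theta, 1.0 / np.sqrt(n_obs))
keep = np.abs(s_sim - s_obs) < eps

# plain accept/reject ABC sample
theta_ar = theta[keep]

# simplified (unweighted) linear regression adjustment à la Beaumont et al.
slope, _ = np.polyfit(s_sim[keep], theta_ar, 1)
theta_reg = theta_ar - slope * (s_sim[keep] - s_obs)

# crude diagnostic: compare the two ABC approximations and monitor the
# acceptance rate as the tolerance shrinks
print("accept/reject mean:       ", theta_ar.mean())
print("regression-adjusted mean: ", theta_reg.mean())
print("acceptance rate:          ", keep.mean())
```

The size of any gap between the two means obviously depends on the summaries and on the nature of the misspecification; this snippet only shows how the two quantities entering such a diagnostic could be computed from a single reference table.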

two ABC postdocs at Monash

Posted in Statistics on April 4, 2017 by xi'an

For students, postdocs and faculty working on approximate inference, ABC algorithms, and likelihood-free methods, this announcement of two postdoc positions at Monash University, Melbourne, Australia, to work with Gael Martin, David Frazier and Catherine Forbes should be of strong relevance and particular interest:

The Department of Econometrics and Business Statistics at Monash is looking to fill two postdoc positions – one for 12 months and the other for two years. The positions will be funded (respectively) by the following ARC Discovery grants:

1. DP150101728: “Approximate Bayesian Computation in State Space Models”. (Chief Investigators: Professor Gael Martin and Associate Professor Catherine Forbes; International Partner Investigators: Professor Brendan McCabe and Professor Christian Robert).

2. DP170100729: “The Validation of Approximate Bayesian Computation: Theory and Practice”. (Chief Investigators: Professor Gael Martin and Dr David Frazier; International Partner Investigators: Professor Christian Robert and Professor Eric Renault).

The deadline for applications is April 28th, 2017, and the nominal starting date is July 2017 (although there is some degree of flexibility on that front).

Albert’s block

Posted in pictures, Travel, Wines on February 20, 2017 by xi'an

Pitman medal for Kerrie Mengersen

Posted in pictures, Statistics, Travel, University life on December 20, 2016 by xi'an

My friend and co-author of many years, Kerrie Mengersen, just received the 2016 Pitman Medal, which is the prize of the Statistical Society of Australia. Congratulations to Kerrie for this well-deserved recognition of her massive contributions to Australian, Bayesian, computational, and modelling statistics, and to data science as a whole. (In case you wonder about the picture above, she has not yet lost the medal, but is instead looking for jaguars in the Amazon.)

This medal is named after EJG Pitman, Australian probabilist and statistician, whose name is attached to an estimator, a lemma, a measure of efficiency, a test, and a measure of comparison between estimators. His estimator is the best equivariant (or invariant) estimator, which can be expressed as a Bayes estimator under the relevant right Haar measure, despite having no Bayesian motivation to start with. His lemma is the Pitman-Koopman-Darmois lemma, which states that outside exponential families, sufficiency is essentially useless (except for exotic distributions like the Uniform distributions). Darmois published the result first, in 1935, but in French, in the Comptes Rendus de l’Académie des Sciences. And the measure of comparison is Pitman nearness or closeness, on which I wrote a paper with my friends Gene Hwang and Bill Strawderman, a paper that we thought would be the final word on the measure as it pointed out several major deficiencies with the concept. But the literature continued to grow after that..!
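For the record, and in case the names are unfamiliar, here are textbook statements of two of the objects mentioned above, written for a location family with density f(x−θ); these are standard formulas rather than anything specific to this post.

```latex
% Pitman estimator of a location parameter \theta under squared error loss:
% the Bayes estimator associated with the flat (right Haar) prior on \theta
\[
\hat\theta_{\mathrm{P}}(x_1,\dots,x_n)
  = \frac{\int_{\mathbb{R}} \theta\,\prod_{i=1}^n f(x_i-\theta)\,\mathrm{d}\theta}
         {\int_{\mathbb{R}} \prod_{i=1}^n f(x_i-\theta)\,\mathrm{d}\theta}
\]

% Pitman closeness: \delta_1 is Pitman-closer to \theta than \delta_2 when,
% for every \theta (with strict inequality for at least one value),
\[
\mathbb{P}_\theta\!\left(\,|\delta_1(X)-\theta| < |\delta_2(X)-\theta|\,\right) \;\geq\; \tfrac{1}{2}
\]
```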