## Archive for IMS

## Xmas tree at UCL, with a special gift

Posted in Books, pictures, Statistics, Travel, University life with tags academic position, Annals of Applied Statistics, Annals of Probability, Annals of Statistics, Britain, Current Index to Statistics, fellowships, IMS, London, Statistics, The BUGS book, UCL, University College London, Xmas on November 26, 2019 by xi'an

**P**h.D. students at UCL Statistics have made this Xmas tree out of bound and unbound volumes of statistics journals, not too hard to spot (especially the Current Indexes, which I abandoned when I left my INSEE office a few years ago). An invisible present under the tree is the opening of several positions, namely two permanent lectureships and two three-year research fellowships, all in Statistics or Applied Probability, with the fellowship deadline being the 1st of December 2019!

## Lawrence D. Brown PhD Student Award

Posted in Kids, Statistics, Travel, University life with tags IMS, IMS Annual Meeting, IMS Lawrence D. Brown PhD Student Award, JSM, Larry Brown, travel award, Wharton Business School on June 3, 2019 by xi'an

*[Reproduced from the IMS Bulletin, an announcement of a travel award for PhD students in celebration of my friend Larry Brown!]*

Lawrence D. Brown (1940-2018), Miers Busch Professor and Professor of Statistics at The Wharton School, University of Pennsylvania, had a distinguished academic career with groundbreaking contributions to a range of fields in theoretical and applied statistics. He was an IMS Fellow, IMS Wald Lecturer, and a former IMS President. Moreover, he was an enthusiastic and dedicated mentor to many graduate students. In 2011, he was recognized for these efforts as a recipient of the Provost’s Award for Distinguished PhD Teaching and Mentoring at the University of Pennsylvania.

Brown’s firm dedication to all three pillars of academia — research, teaching and service — sets an exemplary model for generations of new statisticians. Therefore, the IMS is introducing a new award for PhD students created in his honor: the IMS Lawrence D. Brown PhD Student Award.

This annual travel award will be given to three PhD students, who will present their research at a special invited session during the IMS Annual Meeting. The submission process is now open and applications are due by July 15th, 2019 for the 2020 award. More details, including eligibility and application requirements, can be found at: https://www.imstat.org/ims-awards/ims-lawrence-d-brown-ph-d-student-award/

Donations are welcome as well, through https://www.imstat.org/contribute-to-the-ims/ under “IMS Lawrence D. Brown Ph.D. Student Award Fund”.

## bootstrap in Nature

Posted in Statistics with tags ASA, bootstrap, Brad Efron, empirical Bayes methods, IMS, International Prize in Statistics, ISI, Nature, Nobel Prize, RSS, Stanford on December 29, 2018 by xi'an

**A** news item in the latest issue of Nature I received, about Brad Efron winning the “Nobel Prize of Statistics” this year. The bootstrap is certainly an invention worth the recognition, not to mention Efron’s contributions to empirical Bayes analysis, even though I remain overall reserved about the very notion of a Nobel prize in any field… With an appropriate quote from XXL, who called the bootstrap method the ‘best statistical pain reliever ever produced’!
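As a reminder of how simple the underlying idea is, here is a minimal bootstrap estimate of the standard error of a sample mean, in Python (the exponential toy data and the number of replicates are arbitrary choices of mine, not from the Nature piece):

```python
import random
import statistics

random.seed(10)
x = [random.expovariate(1.0) for _ in range(100)]   # toy exponential sample

# bootstrap: resample the data with replacement, recompute the statistic,
# and use the spread of the replicates as an estimate of sampling variability
B = 2000
boot_means = [statistics.fmean(random.choices(x, k=len(x))) for _ in range(B)]
se_hat = statistics.stdev(boot_means)   # bootstrap standard error of the mean
```

For the mean, this essentially recovers the textbook s/√n formula, but the same resampling loop applies verbatim to statistics with no closed-form standard error.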

## approximate likelihood perspective on ABC

Posted in Books, Statistics, University life with tags ABC, Approximate Bayesian computation, approximate likelihood, curse of dimensionality, g-and-k distributions, Gibbs sampling, IMS, MCqMC 2018, mixed effect models, Potts model, Statistics Surveys, summary statistics, survey, tolerance, winference on December 20, 2018 by xi'an

**G**eorge Karabatsos and Fabrizio Leisen have recently published in Statistics Surveys a fairly complete survey on ABC methods [whose earlier arXival I had missed], listing within an extensive 20-page bibliography some twenty-plus earlier reviews on ABC (with further ones in applied domains)!

*“(…) any ABC method (algorithm) can be categorized as either (1) rejection-, (2) kernel-, and (3) coupled ABC; and (4) synthetic-, (5) empirical- and (6) bootstrap-likelihood methods; and can be combined with classical MC or VI algorithms [and] all 22 reviews of ABC methods have covered rejection and kernel ABC methods, but only three covered synthetic likelihood, one reviewed the empirical likelihood, and none have reviewed coupled ABC and bootstrap likelihood methods.”*
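As a reminder of category (1) in its crudest form, here is a rejection-ABC sketch in Python on a toy Normal location model, with the sample mean as summary statistic t(.) (the tolerance, prior range, and sample sizes are illustrative choices of mine, not taken from the survey):

```python
import random
import statistics

random.seed(42)
n = 100
y_obs = [random.gauss(2.0, 1.0) for _ in range(n)]   # toy "observed" data
t_obs = statistics.fmean(y_obs)                      # summary statistic t(.)

eps = 0.05                                           # ABC tolerance
posterior = []
for _ in range(20000):
    theta = random.uniform(-5.0, 5.0)                # draw from the prior
    z = [random.gauss(theta, 1.0) for _ in range(n)] # simulate pseudo-data
    if abs(statistics.fmean(z) - t_obs) < eps:       # compare summaries
        posterior.append(theta)                      # keep matching draws
```

The other five categories in the quote replace this hard accept/reject step with a kernel weight or with an explicit (synthetic, empirical, or bootstrap) approximation of the likelihood.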

The motivation for using approximate likelihood methods is provided by the examples of g-and-k distributions (although the likelihood can be efficiently derived by numerical means, as shown by Pierre Jacob‘s winference package), of mixed effect linear models (although a completion by the mixed effects themselves is available for Gibbs sampling, as in Zeger and Karim, 1991), and of the hidden Potts model (which we covered by pre-processing in our 2015 paper with Matt Moores, Chris Drovandi, and Kerrie Mengersen). The paper produces a general representation of the approximate likelihood that covers the algorithms listed above, as shown in the table below (where t(.) denotes the summary statistic):

The table looks a wee bit challenging, simply because the review includes the synthetic likelihood approach of Wood (2010), which figured preeminently in the 2012 Read Paper discussion but opens the door to all kinds of approximations of the likelihood function, including variational Bayes and non-parametric versions. After a description of the above versions (including a rather ignored coupled version) and of the specific issue of ABC model choice, the authors expand on the difficulties of running ABC, from multiple tuning issues, to the genuine curse of dimensionality in the parameter (with unnecessary remarks on low-dimensional sufficient statistics, since they almost surely do not exist in most realistic settings), to the mis-specified case (on which we are currently working with David Frazier and Judith Rousseau). To conclude, a worthwhile update on ABC and, on the side, a funny typo from the reference list!

Li, W. and Fearnhead, P. (2018, in press). On the asymptotic efficiency of approximate Bayesian computation estimators. Biometrikanana-na.

## Le Monde puzzle [#1650]

Posted in Books, Kids, R with tags Alice and Bob, competition, IMS, Le Monde, mathematical puzzle, NUS, partition, R, simulated annealing, Singapore on September 5, 2018 by xi'an

**A** penultimate Le Monde mathematical puzzle before the new competition starts [again!]

For a game opposing 40 players over 12 questions, anyone answering correctly a question gets as reward the number of people who failed to answer. Alice is the single winner: what is her minimal score? In another round, Bob is the only lowest grade: what is his maximum score?

For each player, the score is the sum S = δ¹s¹+…+δ¹²s¹², where δⁱ is the indicator of a correct answer to question i and sⁱ is the sum over all other players of the complementary indicator, that is, the number of players who failed question i, which can be replaced with the sum over all players since δⁱ(1−δⁱ)=0. Leading to the vector of scores

```r
worz <- function(ansz){            # ansz: 40 x 12 matrix of 0/1 answers
  scor <- apply(1 - ansz, 2, sum)  # per question, number of wrong answers
  apply(t(ansz) * scor, 2, sum)    # per player, total reward over questions
}
```

Now, running by brute force a massive number of simulations confirmed my intuition that the minimal winning score is 39, the number of players minus one [achieved by Alice giving a single good answer and the others none at all], while the maximum losing score appeared to be 34, for which I had much less of an intuition! I would have rather guessed something in the vicinity of 80 (being half of the answers replied correctly by half of the players)… Indeed, while in Singapore, I ran in the wee hours a quick simulated annealing code from this solution and moved up to 77.
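For readers preferring Python, the same scoring scheme can be mirrored there, together with a crude random search over answer matrices for configurations with a unique winner (the number of simulated rounds and the random difficulty per round are arbitrary choices of mine):

```python
import random

random.seed(1)
P, Q = 40, 12   # players, questions

def scores(ans):
    """ans[p][q] = 1 if player p answered question q correctly."""
    wrong = [P - sum(ans[p][q] for p in range(P)) for q in range(Q)]
    return [sum(ans[p][q] * wrong[q] for q in range(Q)) for p in range(P)]

best_win = float("inf")   # smallest unique-winner score found so far
for _ in range(2000):
    p_corr = random.random()   # random per-round difficulty
    ans = [[int(random.random() < p_corr) for _ in range(Q)] for _ in range(P)]
    sco = scores(ans)
    top = max(sco)
    if sco.count(top) == 1 and top < best_win:   # a sole winner, like Alice
        best_win = top
```

On the Alice configuration described above (one correct answer, everyone else blank), `scores` indeed returns 39 for Alice and 0 for all other players.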

And the 2018 version of the Le Monde maths puzzle competition starts today, for a total of eight double questions, starting with an optimisation problem where the adjacent X table is filled with zeros and ones, trying to optimise (max and min) the number of *positive* entries [out of 45] for which an even number of neighbours is equal to one. On the represented configuration, green stands for one (16 ones) and P for the positive entries (31 of them). This should be amenable to an R resolution, once again by simulated annealing. Deadline for the reply on the competition website is next Tuesday, midnight [UTC+1].
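For such 0/1 configuration problems, a generic simulated annealing skeleton looks as follows, in Python; since the puzzle's table is not reproduced here, the score function below is a mere placeholder to be replaced by the actual count of positive entries with an even number of one-valued neighbours:

```python
import math
import random

random.seed(2)
N = 45   # number of table entries in the puzzle

def score(x):
    # placeholder objective: the real one would count the "positive" entries
    # with an even number of one-valued neighbours on the puzzle's grid
    return sum(x)

def anneal(x, score, steps=20000):
    cur_s = score(x)
    best, best_s = x[:], cur_s
    for t in range(1, steps + 1):
        temp = 1.0 / math.log(1.0 + t)       # logarithmic cooling schedule
        y = x[:]
        y[random.randrange(len(y))] ^= 1     # flip a single 0/1 entry
        new_s = score(y)
        # always accept improvements, deteriorations with probability e^{Δ/T}
        if new_s >= cur_s or random.random() < math.exp((new_s - cur_s) / temp):
            x, cur_s = y, new_s
        if cur_s > best_s:                   # keep track of the best visited
            best, best_s = x[:], cur_s
    return best, best_s

best, best_s = anneal([random.randint(0, 1) for _ in range(N)], score)
```

Minimisation obtains by negating the score; the cooling schedule and number of steps are tuning choices.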

## IMS workshop [day 3]

Posted in pictures, R, Statistics, Travel, University life with tags Bayesian computation, Birch, delayed simulation, high dimensions, hypocoercivity, IMS, Institute for Mathematical Sciences, Lapland, MCqMC 2018, National University Singapore, non-reversible diffusion, NUS, ODE, partly deterministic processes, probabilistic programming, Rao-Blackwellisation, Rennes, Singapore, Wang-Landau algorithm, workshop on August 30, 2018 by xi'an

**I** made the “capital” mistake of walking across the entire NUS campus this morning, which is quite green and pretty, but which almost enjoys an additional dimension brought by such an intense humidity that one feels like having to push through it!, a feature I had managed to completely erase from my memory of my previous visit there. Anyway, nothing of any relevance. One talk in the morning was by Markus Eisenbach on tools used by physicists to speed up Monte Carlo methods, like the Wang-Landau flat histogram, towards computing the partition function, or the distribution of the energy levels, definitely addressing issues close to my interests, but somewhat beyond my reach, for using a different language and emphasis, as often in physics. (I mean, as often in the physics talks I attend.) One idea that came out clearly to me was to bypass a (flat) histogram target and aim directly at a constant-slope cdf for the energy levels. (But I got scared away by the Fourier transforms!)
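To illustrate the flat-histogram idea on the simplest possible system, here is a Wang-Landau sketch in Python estimating the density of states of two dice (the schedule and sweep lengths are arbitrary choices of mine, not from the talk):

```python
import math
import random

random.seed(7)
# toy system: two dice, "energy" E = sum of the faces; the true density of
# states g(E) is proportional to (1,2,3,4,5,6,5,4,3,2,1) for E = 2..12
log_g = {e: 0.0 for e in range(2, 13)}   # running estimates of log g(E)
f = 1.0                                  # log modification factor
state = [1, 1]

while f > 1e-3:
    for _ in range(20000):
        prop = state[:]
        prop[random.randrange(2)] = random.randint(1, 6)   # re-roll one die
        # accept with prob min(1, g(E_cur)/g(E_prop)): penalising already
        # well-visited energies is what flattens the energy histogram
        diff = log_g[sum(state)] - log_g[sum(prop)]
        if random.random() < math.exp(min(0.0, diff)):
            state = prop
        log_g[sum(state)] += f           # raise the estimate at this level
    f /= 2.0                             # refine the modification factor
```

Up to an additive constant, the differences in `log_g` recover the log density of states, e.g. log g(7) − log g(2) ≈ log 6, which is what makes the partition function (a weighted sum over g) accessible.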

Lawrence Murray then discussed some features of the Birch probabilistic programming language he is currently developing, especially a fairly fascinating concept of delayed sampling, which connects with locally-optimal proposals and Rao-Blackwellisation. Which I plan to get back to later [and hopefully sooner rather than later!].
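My rough understanding of delayed sampling, sketched in Python on a toy Normal-Normal model (this is an illustrative sketch, not Birch code): the simulation of a variable is postponed as long as a closed-form marginal is available, and it is only instantiated from its conditional when actually needed:

```python
import math
import random
import statistics

random.seed(3)
# toy Normal-Normal model: mu ~ N(mu0, s0^2) and x | mu ~ N(mu, s^2)
mu0, s0, s = 0.0, 1.0, 0.5

def delayed_draw():
    # delayed sampling: x is simulated from its closed-form marginal,
    # N(mu0, s0^2 + s^2), so mu need not be instantiated yet...
    x = random.gauss(mu0, math.sqrt(s0**2 + s**2))
    # ...and only when mu is eventually needed is it drawn from its
    # conditional given x (standard Normal-Normal conjugacy)
    w = s0**2 / (s0**2 + s**2)
    mu = random.gauss(w * x + (1.0 - w) * mu0, math.sqrt(w * s**2))
    return x, mu

pairs = [delayed_draw() for _ in range(20000)]
```

The joint distribution of (x, mu) is unchanged, but x has been produced with mu analytically marginalised out, which is exactly the Rao-Blackwellisation connection.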

In the afternoon, Maria de Iorio gave a talk about the construction of nonparametric priors that create dependence between a sequence of functions, a notion I had not thought of before, with an array of possibilities when using the stick breaking construction of Dirichlet processes.
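For reference, the stick-breaking construction itself fits in a short Python sketch (the truncation level and concentration parameter below are arbitrary):

```python
import random

random.seed(5)

def stick_breaking(alpha, K=200):
    """Truncated stick-breaking weights of a Dirichlet process DP(alpha)."""
    weights, remaining = [], 1.0
    for _ in range(K):
        v = random.betavariate(1.0, alpha)   # Beta(1, alpha) stick proportion
        weights.append(remaining * v)        # break off a piece of the stick
        remaining *= 1.0 - v                 # and keep the rest for later
    return weights

w = stick_breaking(alpha=2.0)
```

Dependence between a sequence of such processes can then be induced by sharing or perturbing the underlying Beta variates across the sequence.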

And Christophe Andrieu gave a very smooth and helpful entry to partly deterministic Markov processes (PDMPs), in preparation for the talks he is giving next week for the continuation of the workshop at IMS, starting with the guided random walk of Gustafson (1998), which extended a bit later into the non-reversible paper of Diaconis, Holmes, and Neal (2000). Although I had a vague idea of the contents of these papers, the role of the velocity **ν** became much clearer. And premonitory of the advances made by the more recent PDMP proposals. There is obviously a continuity with the equally pedagogical talk Christophe gave at MCqMC in Rennes two months [and half the globe] ago, but the focus being somewhat different, it really felt like a new talk [my short-term memory may also play some role in this feeling!, as I now remember the discussion of Hildebrand (2002) for non-reversible processes]. An introduction to the topic I would recommend to anyone interested in this new branch of Monte Carlo simulation! To be followed by the most recently arXived hypocoercivity paper by Christophe and co-authors.
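As I understand it, the guided random walk can be sketched in a few lines of Python for a standard Normal target (the step scale and chain length are arbitrary choices of mine): the chain keeps moving in the direction of the current velocity v, and only reverses v upon a rejection, which is the non-reversible germ later generalised by PDMPs.

```python
import math
import random
import statistics

random.seed(4)

def guided_walk(n=20000, h=0.5):
    """Guided random walk targeting N(0,1): move persistently in the
    direction v, and flip v only when a proposal is rejected."""
    x, v = 0.0, 1
    out = []
    for _ in range(n):
        y = x + v * h * random.random()      # propose along direction v
        # usual Metropolis ratio for the N(0,1) target
        if random.random() < math.exp(min(0.0, (x * x - y * y) / 2.0)):
            x = y                            # accept: keep the direction
        else:
            v = -v                           # reject: reverse the velocity
        out.append(x)
    return out

xs = guided_walk()
```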

## IMS workshop [day 2]

Posted in pictures, Statistics, Travel with tags ABC, dawn, Engineering, equator, heat, humidity, IMS, Institute for Mathematical Sciences, National University Singapore, NUS, prediction, Singapore, workshop on August 29, 2018 by xi'an

**H**ere are the slides of my talk today on using Wasserstein distances as an intrinsic distance measure in ABC, as developed in our papers with Espen Bernton, Pierre Jacob, and Mathieu Gerber:
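For a univariate illustration of the idea, the empirical 1-Wasserstein distance between two equal-size samples reduces to the mean absolute difference between order statistics, as in this Python sketch (sample sizes and toy distributions are arbitrary):

```python
import random
import statistics

random.seed(1)

def wasserstein1(x, y):
    """Empirical 1-Wasserstein distance between equal-size univariate
    samples: the mean absolute difference between order statistics."""
    assert len(x) == len(y)
    return statistics.fmean(abs(a - b) for a, b in zip(sorted(x), sorted(y)))

same = wasserstein1([random.gauss(0, 1) for _ in range(1000)],
                    [random.gauss(0, 1) for _ in range(1000)])
shifted = wasserstein1([random.gauss(0, 1) for _ in range(1000)],
                       [random.gauss(3, 1) for _ in range(1000)])
```

The appeal for ABC is that this compares full empirical distributions of observed and simulated data, bypassing the choice of summary statistics.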

This morning, Gael Martin discussed the surprising aspects of ABC prediction, expanding upon her talk at ISBA, with several threads very much worth weaving into the ABC tapestry, one being that summary statistics need be used to increase the efficiency of the prediction, along with better-adapted measures of distance. Her talk also led me to ponder the myriad of possibilities available or not in the most generic of ABC predictions (which is not the framework of Gael’s talk). If we imagine a highly intractable setting, it may be that the marginal generation of a predicted value at time t+1 requires the generation of the entire past, from time 1 till time t, possibly because of a massive dependence on latent variables, and the absence of particle filters, if this makes any sense. Therefore, based on a generated parameter value θ, it may be that the entire series needs be simulated to reach the last value in the series. Even when unnecessary, this may be an alternative to conditioning upon the actual series. In this latter case, comparing both predictions may act as a natural measure of distance, since one prediction is a function or statistic of the actual data while the other is a function of the simulated data. Another direction I mused about is the use of (handy) auxiliary models, each producing a prediction as a new statistic, which could then be merged and weighted (or even selected) by a random forest procedure. Again, if the auxiliary models are relatively well-behaved, timewise, this would be quite straightforward to implement.