## Archive for asymptotics

## Don Fraser (1925-2020)

Posted in Books, Statistics, University life with tags asymptotics, Canada, David Cox, Don Fraser, fiducial inference, fiducial statistics, John Nelder, Nancy Reid, O'Bayes 2019, obituary, Ontario, R.A. Fisher, Statistical Science, University of Toronto, University of Warwick, University of Waterloo on December 24, 2020 by xi'an

**I** just received the very sad news that Don Fraser, emeritus professor of statistics at the University of Toronto, passed away this Monday, 21 December 2020. He was a giant of the field, with a unique ability for abstract modelling, and he certainly pushed fiducial statistics much further than Fisher ever did. He also developed a theory of structural inference that came close to objective Bayesian statistics, although he remained quite critical of the Bayesian approach (always in a most gentle manner, as he was a very nice man!). He further contributed most significantly to higher-order asymptotics, to the critical analysis of the ancillarity and sufficiency principles, and much more. (Statistical Science published a conversation with Don in 2004, providing more personal views on his career till then.) I met with Don and Nancy rather regularly over the years, as they often attended and talked at (objective) Bayesian meetings, from the 1999 edition in Granada to the last one in Warwick in 2019. I also remember a most enjoyable barbecue together, along with Ivar Ekeland and his family, during JSM 2018, on Jericho Park Beach, with a magnificent sunset over the Burrard Inlet. Farewell, Don!

## essentials of probability theory for statisticians

Posted in Books, Kids, pictures, Statistics, Travel, University life with tags asymptotics, book review, central limit theorem, CHANCE, Cydonia, face on Mars, Glivenko-Cantelli Theorem, Henri Lebesgue, Lebesgue integration, measure theory, pareidolia, probability theory, quincunx on April 25, 2020 by xi'an

**O**n yet another confined sunny lazy Sunday morning, I read through Proschan and Shaw's Essentials of Probability Theory for Statisticians, a CRC Press book that was sent to me quite a while ago for review. The book was indeed published in 2016. Before moving to serious things, let me dispense with the customary issue with the cover. I have trouble getting the point of the "face on Mars" being adopted as the cover of a book on probability theory (rather than a book on, say, pareidolia). There is a brief paragraph on post-facto probability calculations, stating how meaningless the question is of the probability of this shade appearing "by chance" on a Viking Orbiter picture, but this is so marginal I would have preferred any other figure from the book!

The book plans to cover the probability essentials for dealing with graduate-level statistics, in particular convergence, conditioning, and the paradoxes that follow from using non-rigorous approaches to probability. A range that completely fits my own prerequisites for statistics students in my classes, and that of course involves recourse to (Lebesgue) measure theory. And a goal that I find both commendable and comforting, as my past experience with exchange students left me with the feeling that rigorous probability theory had mostly been scrapped from graduate programs. While the book is not extremely formal, it provides a proper motivation for the essential need for measure theory to handle the complexities of statistical analysis, and in particular of asymptotics. It thus relies as much as possible on examples that stem from or relate to statistics, even though most examples may appear standard to senior readers, for instance the consistency of the sample median or a weak version of the Glivenko-Cantelli theorem. The final chapter is dedicated to applications (in the probabilists' sense!) that emerged from statistical problems. I felt this final chapter was somewhat stretched compared with what it could have been, as for instance with the multiple motivations of the conditional expectation, but this simply makes for more material. If I had to teach this material to students, I would certainly rely on the book, in particular because of the repeated appearances of the quincunx for motivating non-Normal limits. (A typo near Fatou's lemma missed the dominating measure. And I did not notice the Riemann notation *dx* being extended to the measure in a formal manner.)
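As an aside, the Glivenko-Cantelli phenomenon mentioned above is easy to check numerically; a minimal sketch, with N(0,1) as an arbitrary choice of true distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def sup_dist(n):
    """Kolmogorov-Smirnov distance between the empirical and true N(0,1) cdfs."""
    x = np.sort(rng.standard_normal(n))
    F = stats.norm.cdf(x)
    # the supremum is attained at the jump points of the empirical cdf
    return max(np.max(np.arange(1, n + 1) / n - F),
               np.max(F - np.arange(n) / n))

# the sup distance shrinks (at rate roughly 1/sqrt(n)) as n grows
dists = [sup_dist(n) for n in (100, 10_000, 1_000_000)]
print([round(d, 3) for d in dists])
```

By the Glivenko-Cantelli theorem the printed distances converge to zero almost surely, uniformly in the argument of the cdf.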

*[Disclaimer about potential self-plagiarism: this post or an edited version will eventually appear in my Books Review section in CHANCE.]*

## sampling-importance-resampling is not equivalent to exact sampling [triste SIR]

Posted in Books, Kids, Statistics, University life with tags asymptotics, cross validated, importance sampling, infinite variance estimators, sampling w/o replacement, self-normalised importance sampling, SIR on December 16, 2019 by xi'an

**F**ollowing an X validated question on the topic, I reassessed a previous impression I had that sampling-importance-resampling (SIR) is equivalent to direct sampling for a given sample size. (As suggested in the above fit between a N(2,½) target and a N(0,1) proposal.) Indeed, when one produces a sample

$$x_1,\ldots,x_n \stackrel{\text{iid}}{\sim} g(x)$$

and resamples with replacement from this sample using the importance weights

$$\omega_i \propto f(x_i)/g(x_i),\qquad i=1,\ldots,n,$$

the resulting sample

$$y_1,\ldots,y_n$$

is neither "i." nor "i.d.", since the resampling step involves a self-normalisation of the weights and hence a global bias in the evaluation of expectations. In particular, if the importance function g is a poor choice for the target f, meaning that the exploration of the whole support is imperfect, if at all possible (when both supports are equal), a given sample may well fail to reproduce the properties of an iid sample, as shown in the graph below, where a Normal density is used for g while f is a Student t₅ density:
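A minimal numerical sketch of this failure, in the light-tailed N(0,1) proposal versus Student t₅ target setting described above (sample sizes and seed are my own illustrative choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000

# proposal draws from g = N(0,1)
x = rng.standard_normal(n)

# self-normalised importance weights f/g for the t_5 target f
w = stats.t.pdf(x, df=5) / stats.norm.pdf(x)
w /= w.sum()  # the normalisation step that induces the global bias

# SIR: resample with replacement according to the weights
y = rng.choice(x, size=n, replace=True, p=w)

# the resampled values are confined to the original draws, so the
# t_5 tails beyond max|x| can never be reproduced by the SIR sample
print(y.max() <= x.max())  # True
```

The resampled values are also heavily tied (repeated), another way in which they fail to behave like an iid sample from f.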

## asymptotics of M³C²L

Posted in Statistics with tags asymptotics, fog, maximum likelihood estimation, M³C²L, Monte Carlo Statistical Methods, Pacific North West, Tofino, Vancouver Island on August 19, 2018 by xi'an

**I**n a recent arXival, Blazej Miasojedow, Wojciech Niemiro and Wojciech Rejchel establish the convergence of a maximum likelihood estimator based on an MCMC approximation of the likelihood function, as occurs with intractable normalising constants. The main result in the paper is a central limit theorem for the M³C²L estimator that incorporates an additional asymptotic variance term for the Monte Carlo error, where both the sample size n and the number m of simulations go to infinity, independently of one another. However, I do not fully perceive the relevance of using an MCMC chain to target an importance function [which is used in the approximation of the normalising constant or otherwise of the intractable likelihood], relative to picking an importance function h(·) that can be directly simulated.
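To fix ideas on the Monte Carlo maximum likelihood construction discussed above, here is a minimal sketch in a toy exponential model where the normalising constant, though actually tractable, is estimated by simulation from a directly simulable importance function h (the Exp(1) choice and all numerical settings are my own illustrative assumptions, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n, m = 5_000, 50_000
theta0 = 2.0

# data from the target f(x|θ) ∝ exp(-θx) on (0,∞), with Z(θ) = 1/θ
data = rng.exponential(1 / theta0, size=n)

# importance draws from h = Exp(1), used to estimate Z(θ) by
# (1/m) Σ exp(-θ u_i) / h(u_i)
u = rng.exponential(1.0, size=m)

def neg_loglik(theta):
    z_hat = np.mean(np.exp(-theta * u) / np.exp(-u))  # MC estimate of Z(θ)
    return theta * data.sum() + n * np.log(z_hat)

res = minimize_scalar(neg_loglik, bounds=(0.1, 10), method="bounded")
# the MC-MLE should sit close to the exact MLE 1/mean(data)
print(res.x, 1 / data.mean())
```

Because the same importance draws u are recycled across values of θ, the approximate log-likelihood is a smooth function of θ, and the extra Monte Carlo variance vanishes as m grows, in line with the paper's double asymptotics in n and m.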

## a paradox about likelihood ratios?

Posted in Books, pictures, Statistics, University life with tags arXiv, asymptotics, Betteridge's law of headlines, Gaussian model, likelihood ratio, paradoxes, St John's College, University of Oxford on January 15, 2018 by xi'an

**A**ware of my fascination for paradoxes (and heterodox publications), Ewan Cameron sent me the link to a recent arXival by Louis Lyons (Oxford) on different asymptotic distributions of the likelihood ratio, which is full of approximations. The overall point of the note is hard to fathom… unless it simply aims to illustrate Betteridge's law of headlines, as suggested by Ewan.

For instance, the limiting distribution of the log-likelihood of an exponential sample at the true value of the parameter τ is not asymptotically Gaussian but almost surely infinite, while twice the log of the (Wilks) likelihood ratio at the true value of τ is indeed (if asymptotically) a χ² variable with one degree of freedom. That it is not a Gaussian is deemed a "paradox" by the author, explained by a cancellation of first-order terms… Same thing again for the common Gaussian mean problem!
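The Wilks limit for the exponential case is easy to check by simulation; a minimal sketch (my own toy setup, not the note's), parameterising the exponential by its rate τ:

```python
import numpy as np

rng = np.random.default_rng(2)
tau0, n, reps = 1.0, 200, 5_000

W = np.empty(reps)
for r in range(reps):
    x = rng.exponential(1 / tau0, size=n)
    s, tau_hat = x.sum(), n / x.sum()  # MLE of the rate τ
    # Wilks statistic 2{ℓ(τ̂) - ℓ(τ0)} for the Exp(τ) log-likelihood
    # ℓ(τ) = n log τ - τ Σ x_i
    W[r] = 2 * ((n * np.log(tau_hat) - tau_hat * s)
                - (n * np.log(tau0) - tau0 * s))

# a χ²₁ variable has mean 1 and P(W ≤ 3.84) ≈ 0.95
print(W.mean(), np.mean(W <= 3.8415))
```

The simulated mean and upper-tail coverage match the χ²₁ reference values, while the log-likelihood itself, n log τ₀ − τ₀ Σxᵢ, diverges almost surely with n, which is the source of the alleged "paradox".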