Archive for fiducial inference

day one at ISBA 22

Posted in pictures, Statistics, Travel, University life on June 29, 2022 by xi'an

Started the day with a much appreciated swimming practice in the [alas warm⁺⁺⁺] outdoor 50m pool on the Island, with no one but me in the slooow lane. And had my first ride with the BIXI system, surprised at having to queue behind other bikes at red lights! More significantly, it was a great feeling to reunite at last with so many friends I had not met for more than two years!!!

My friend Adrian Raftery gave the very first plenary lecture, on his Bayesian approach to long-term population projections, a work that was recently censored by some US States, then counter-censored by the Supreme Court [too busy to kill Roe v. Wade!]. Great to see the use of Bayesian methods validated by the UN Population Division [with at least one branch of the UN…]

Steffen Lauritzen returning to de Finetti's notion of a model as something not real or true at all, back to exchangeability. Making me wonder when exchangeability is more than a convenient assumption leading to the Hewitt-Savage theorem. And to sufficiency. I mean, without falling into a Keynesian fallacy, each point of the sample has unique specificities that cannot be taken into account in an exchangeable model. Nice to hear some measure theory, though!!! Plus a comment on the median never being sufficient, recouping an older (and presumably not original) point of mine. Steffen's (or Fisher's?) argument being that the median cannot be computed recursively!
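To make this last point concrete, a minimal R illustration (mine, not from the talk): the mean admits a constant-memory recursive update, whereas the median offers no such single-number recursion and each new observation may require revisiting the whole sample.

# recursive (one-pass, constant-memory) update of the mean
running_mean <- function(x) {
  m <- 0
  for (n in seq_along(x)) m <- m + (x[n] - m) / n
  m
}
x <- rnorm(1e4)
stopifnot(all.equal(running_mean(x), mean(x)))
# no analogous one-number recursion exists for median(x),
# which must keep (essentially) the entire sample in memory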

Antonietta Mira and I had our ABC session this afternoon, with Cecilia Viscardi, Sirio Legramanti, and Massimiliano Tamborrino (Warwick) as speakers. Cecilia linked ABC with normalising flows, in collaboration with Dennis Prangle (whose earlier paper on this connection was presented as the first One World ABC seminar). Thus using past simulations to approximate the posterior by a neural network, possibly with a significant increase in computing time when compared with more rudimentary SMC-ABC methods in larger dimensions. Sirio considered summary-free ABC based on discrepancies like the Rademacher complexity, which more or less contains MMD, Kullback-Leibler, Wasserstein, and more, although it seems to depend on the parameterisation of the observations. An interesting opening at the end was that this approach could apply to non-iid settings. Massi presented a paper coauthored with Umberto Picchini that had just been arXived, on sequential ABC with a dependence on the summary statistic (hence guided), further bringing copulas into the game, although this forces another choice [for the marginals] in the method.
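As a baseline for all three refinements, a minimal R sketch of the rudimentary rejection ABC they improve upon (a generic illustration of mine, not the speakers' algorithms), with a crude absolute distance standing in for the flows, discrepancies, and guided summaries above:

# rejection ABC for a Normal mean, prior N(0, 10²), one observation
abc_reject <- function(y_obs, N = 1e5, eps = 0.1) {
  theta <- rnorm(N, 0, 10)          # simulate from the prior
  y_sim <- rnorm(N, theta, 1)       # simulate pseudo-data given theta
  theta[abs(y_sim - y_obs) < eps]   # keep draws within tolerance eps
}
post <- abc_reject(y_obs = 1.5)     # approximate posterior sample
summary(post)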

Tamara Broderick talked about a puzzling leverage effect of some observations in economic studies, where a tiny portion of individuals may modify the significance or the sign of a coefficient, for which I cannot tell whether the data or the reliance on statistical significance is to blame. Robert Kohn presented mixture-of-Gaussian copulas [not to be confused with mixture of Gaussian-copulas!] and Nancy Reid concluded my first [and somewhat exhausting!] day at ISBA with a BFF talk on the different statistical paradigms' takes on confidence (for which the notion of calibration seems to remain frequentist).

Side comments: First, most people at the conference are wearing masks, which is great! Also, I find it hard to read slides from the screen, which I presume is an age issue (?!) Even more aside, I had a Korean lunch in a place that refused to serve me a glass of water, which I find amazing.

statistics for making decisions [book review]

Posted in Statistics, Books on March 7, 2022 by xi'an

I bought this book [or more precisely received it from CRC Press as a (prospective) book review reward] as I was interested in the author's perspectives on actual decision making (and unaware of the earlier Statistical Decision Theory book he had written in 2013). It is intended for a postgraduate semester course and is “not for a beginner in statistics”. Exercises with solutions are included in each chapter (with some R code in the solutions). From Chapter 4 onwards, the “Further reading suggestions” primarily refer to papers and books written by the author, as these chapters are based on his earlier papers.

“I regard hypothesis testing as a distraction from and a barrier to good statistical practice. Its ritualised application should be resisted from the position of strength, by being well acquainted with all its theoretical and practical aspects. I very much hope (…) that the right place for hypothesis testing is in a museum, next to the steam engine.”

The first chapter exposes the shortcomings of hypothesis testing for conducting decision making, in particular its ignoring the consequences of the decisions. A perspective with which I agree, but I fear the subsequent developments found in the book remain too formalised to be appealing, reverting to the over-simplification found in Neyman-Pearson theory. The second chapter is somewhat superfluous for a book assuming prior exposure to statistics, with a quick exposition of the frequentist, Bayesian, and … fiducial paradigms. With estimators being first defined without reference to a specific loss function. And I find the presentation of the fiducial approach rather shaky (if usual), especially when the fiducial perspective is to be used as a default Bayes approach in the subsequent chapters. I also do not understand the notation (p.31)

P(\hat\theta<c;\,\theta\in\Theta_\text{H})

outside of a Bayesian (or fiducial?) framework, as spelled out below. (I did not spot typos, aside from the traditional “the the” duplicates, with at least six occurrences!)
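Spelling out my difficulty: without a distribution on θ, the joint statement can only be read as a worst case over the hypothesis, i.e. (my reading, not the author's)

\sup_{\theta\in\Theta_\text{H}}\ P_\theta(\hat\theta<c)

rather than the Bayesian

P(\hat\theta<c,\,\theta\in\Theta_\text{H})=\int_{\Theta_\text{H}}P_\theta(\hat\theta<c)\,\pi(\theta)\,\text{d}\theta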

The aforementioned subsequent chapters are not particularly enticing, as they cater to artificial loss functions and engage in detailed derivations that do not seem essential, at times appearing to be little more than simple calculus exercises. The very construction of the loss function, which I deem critical to implementing statistical decision theory, is mostly bypassed. The overall setting is also frighteningly unidimensional: in the parameter, in the statistic, and in the decision. Covariates only appear in the final chapter, which has very little connection with decision making in that the loss function there is the standard quadratic loss, used to achieve an optimal composition of estimators rather than to select the best model. The book is also missing practical or realistic illustrations.

“With a bit of immodesty and a tinge of obsession, I would like to refer to the principal theme of this book as a paradigm, ascribing to it as much importance and distinction as to the frequentist and Bayesian paradigms”

The book concludes with a short postscript (pp.247-249) reproducing the introductory paragraphs about the ill-suited nature of hypothesis testing for decision making. A conclusion that would have been better supported by a stronger engagement in eliciting loss functions from the clients and in quantifying the consequences of actions…

[Disclaimer about potential self-plagiarism: this post or an edited version will eventually appear in my Book Review section in CHANCE.]

Fisher, Bayes, and predictive Bayesian inference [seminar]

Posted in Statistics on April 4, 2021 by xi'an

An interesting Foundations of Probability seminar at Rutgers University this Monday, at 4:30pm ET (8:30pm GMT), by Sandy Zabell (the password is Angelina's birthdate):

R. A. Fisher is usually perceived to have been a staunch critic of the Bayesian approach to statistics, yet his last book (Statistical Methods and Scientific Inference, 1956) is much closer in spirit to the Bayesian approach than the frequentist theories of Neyman and Pearson.  This mismatch between perception and reality is best understood as an evolution in Fisher’s views over the course of his life.  In my talk I will discuss Fisher’s initial and harsh criticism of “inverse probability”, his subsequent advocacy of fiducial inference starting in 1930, and his admiration for Bayes expressed in his 1956 book.  Several of the examples Fisher discusses there are best understood when viewed against the backdrop of earlier controversies and antagonisms.

Don Fraser (1925-2020)

Posted in Books, Statistics, University life on December 24, 2020 by xi'an

I just received the very sad news that Don Fraser, emeritus professor of statistics at the University of Toronto, passed away this Monday, 21 December 2020. He was a giant of the field, with a unique ability for abstract modelling, and he certainly pushed fiducial statistics much further than Fisher ever did. He also developed a theory of structural inference that came close to objective Bayesian statistics, although he remained quite critical of the Bayesian approach (always in a most gentle manner, as he was a very nice man!). And he most significantly contributed to higher-order asymptotics, to the critical analysis of the ancillarity and sufficiency principles, and much more. (Statistical Science published a conversation with Don in 2004, providing more personal views on his career till then.) I met with Don and Nancy rather regularly over the years, as they often attended and talked at (objective) Bayesian meetings, from the 1999 edition in Granada to the last one in Warwick in 2019. I also remember a most enjoyable barbecue together, along with Ivar Ekeland and his family, during JSM 2018, on Jericho Beach, with a magnificent sunset over the Burrard Inlet. Farewell, Don!

rare ABC [webinar impressions]

Posted in Books, Statistics, Travel, University life on April 28, 2020 by xi'an

A second occurrence of the One World ABC seminar, by Ivis Kerama and Richard Everitt (Warwick U), on their on-going paper with Tom Thorne, Rare Event ABC-SMC², which is not about rare event simulation but truly about ABC improvement. Building upon a previous paper by Prangle et al. (2018). And also connected with Dennis' talk a fortnight ago, in that it exploits an autoencoder representation of the simulated outcome as H(u,θ). It also reminded me of an earlier talk by Nicolas Chopin.

This approach avoids using summary statistics (but relies on a particular distance) and implements a biased sampling of the u's to produce outcomes better suited to the observation(s). Almost sounds like a fiducial ABC! Their stopping rule for decreasing the tolerance is to spot an increase in the variance of the likelihood estimates. As the method requires many data generations for a single θ, it only applies in certain settings. The ABC approximation is indeed used as an estimate of the likelihood ratio (which makes sense for SMC² but is biased because of ABC). I got slightly confused during Richard's talk by his use of the term unbiased estimator of the likelihood, before I realised he was talking of the ABC posterior. Thanks to both speakers, and looking forward to the talk by Umberto Picchini in a fortnight (on a joint paper with Richard).
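For illustration, a toy R version of the kind of diagnostic behind their stopping rule (my own sketch under strong simplifying assumptions, not the authors' code): estimate the ABC likelihood at a fixed θ by the proportion of simulations falling within the tolerance, and watch the variability of these estimates deteriorate as the tolerance shrinks.

# ABC likelihood estimate at a fixed theta (Normal toy model)
abc_lik <- function(theta, y_obs, eps, M = 1e3)
  mean(abs(rnorm(M, theta, 1) - y_obs) < eps)

y_obs <- 0.5
for (eps in c(1, 0.5, 0.1, 0.05)) {
  L <- replicate(100, abc_lik(theta = 0, y_obs, eps))
  cat(sprintf("eps = %4.2f  mean = %.4f  relative sd = %.2f\n",
              eps, mean(L), sd(L) / mean(L)))
}
# the relative variability of the estimates blows up as eps decreases,
# signalling when to stop lowering the tolerance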
