Archive for Statistical Science

Don Fraser (1925-2020)

Posted in Books, Statistics, University life on December 24, 2020 by xi'an

I just received the very sad news that Don Fraser, emeritus professor of statistics at the University of Toronto, passed away this Monday, 21 December 2020. He was a giant of the field, with a unique ability for abstract modelling, and he certainly pushed fiducial statistics much further than Fisher ever did. He also developed a theory of structural inference that came close to objective Bayesian statistics, although he remained quite critical of the Bayesian approach (always in a most gentle manner, as he was a very nice man!). And he made most significant contributions to higher-order asymptotics, to the critical analysis of the ancillarity and sufficiency principles, and much more besides. (Statistical Science published a conversation with Don in 2004, providing more personal views on his career till then.) I met with Don and Nancy rather regularly over the years, as they often attended and talked at (objective) Bayesian meetings, from the 1999 edition in Granada to the last one in Warwick in 2019. I also remember a most enjoyable barbecue together, along with Ivar Ekeland and his family, during JSM 2018, on Jericho Beach, with a magnificent sunset over Burrard Inlet. Farewell, Don!

remembering Joyce Fienberg through Steve’s words

Posted in Statistics on October 28, 2018 by xi'an

I just learned the horrific news that Joyce Fienberg was one of the eleven people murdered yesterday morning at the Tree of Life synagogue. I had been vaguely afraid this could be the case ever since hearing about the shooting there, because it was not far from the University of Pittsburgh and CMU, but then a friend emailed me that she was indeed one of the victims. When her husband Steve was on sabbatical in Paris, we met a few times for memorable dinners. I think the last time I saw her was a few years ago in a Paris hotel where Joyce, Steve and I had breakfast together to take advantage of one of their short trips to Paris. In remembrance of this wonderful woman who was murdered by an antisemitic extremist, here is how Steve described their encounter in his Statistical Science interview:

I had met my wife Joyce at the University of Toronto when we were both undergraduates. I was actually working in the fall of 1963 in the registrar’s office, and on the first day the office opened to enroll people, Joyce came through. And one of the benefits about working in the registrar’s office, besides earning some spending money, was meeting all these beautiful women students passing through. That first day I made a note to ask Joyce out on a date. The next day she came through again, this time bringing through another young woman who turned out to be the daughter of friends of her parents. And I thought this was a little suspicious, but auspicious in the sense that maybe I would succeed in getting a date when I asked her. And the next day, she came through again! This time with her cousin! Then I knew that this was really going to work out. And it did. We got engaged at the end of the summer of 1964 after I graduated, but we weren’t married when I went away to graduate school. In fact, yesterday I was talking to one of the students at the University of Connecticut who was a little concerned about graduate school; it was wearing her down, and I told her I almost left after the first semester because I wasn’t sure if I was going to make a go of it, in part because I was lonely. But I did survive, and Joyce came at the end of the first year; we got married right after classes ended, and we’ve been together ever since.

Gaussian hare and Laplacian tortoise

Posted in Books, Kids, pictures, Statistics, University life on October 19, 2018 by xi'an

A question on X validated about the comparative merits of L¹ versus L² estimation led me to the paper of Stephen Portnoy and Roger Koenker entitled "The Gaussian Hare and the Laplacian Tortoise: Computability of Squared-Error versus Absolute-Error Estimators", which I had missed at the time, despite enjoying a subscription to Statistical Science until the late 1990s. The authors went as far as producing a parody of Grandville's illustrations of the Fables de La Fontaine by sticking Laplace's and Gauss' heads on the tortoise and the hare!

I remember rather vividly going through Steve Stigler's account of the opposition between Laplace's and Legendre's approaches when reading his History of Statistics in 1990 or 1991… Laplace defended the absolute-error criterion on the basis of the default double-exponential (or Laplace) distribution, while Legendre and then Gauss argued in favour of the squared-error loss on the basis of a default Normal (or Gaussian) distribution. (Edgeworth later returned to the support of the L¹ criterion.) Portnoy and Koenker focus mostly on ways of accelerating the derivation of L¹ regression estimators. (I also learned from the paper that Koenker was one of the originators of quantile regression.)
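The robustness side of this old debate is easy to demonstrate numerically. Here is a minimal sketch of my own (not from Portnoy and Koenker): the L² location estimate is the sample mean, the L¹ estimate is the median, and a single gross outlier drags the former far more than the latter.

```python
import numpy as np

# L2 (least-squares) location estimation yields the sample mean; L1
# (least-absolute-error) estimation yields the median. One gross outlier
# drags the mean by roughly outlier/n, while the median barely moves.
rng = np.random.default_rng(0)
x = rng.normal(size=99)
x_dirty = np.append(x, 100.0)  # contaminate with a single outlier

l2_shift = abs(x_dirty.mean() - x.mean())
l1_shift = abs(np.median(x_dirty) - np.median(x))

print(f"mean shift:   {l2_shift:.3f}")   # large, dragged by the outlier
print(f"median shift: {l1_shift:.3f}")   # tiny
```

The same contrast drives the computational story of the paper: the L² solution is a closed-form projection, while the L¹ solution calls for linear-programming machinery, hence the hare and the tortoise.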

the first Bayesian

Posted in Statistics on February 20, 2018 by xi'an

In the first issue of Statistical Science for this year (2018), Stephen Stigler pursues the origins of Bayesianism as attributable to Richard Price, the main author of Bayes' Essay. (This incidentally relates to an earlier 'Og piece on that notion!) Steve points out the considerable input of Price on this Essay, even though the mathematical advance is very likely to be entirely Bayes'. It may well have been Price, however, who initiated Bayes' reflections on the matter, towards producing a counter-argument to Hume's "Of Miracles".

“Price’s caution in addressing the probabilities of hypotheses suggested by data is rare in early literature.”

A section of the paper is about Price's approach to data-determined hypotheses and the fact that considering such hypotheses cannot easily fit within a Bayesian framework. As stated by Price, "it would be improbable as infinite to one". Which is a nice way to address the infinite-mass prior.


amazing appendix

Posted in Books, Statistics, Travel, University life on February 13, 2018 by xi'an

In the first appendix of the 1995 Statistical Science paper of Besag, Green, Higdon and Mengersen on MCMC, "Bayesian Computation and Stochastic Systems", stands a fairly neat result I was not aware of (and which Arnaud Doucet, with his unrivalled knowledge of the literature!, pointed out to me in Oxford, sparing me the tedium of trying to prove it afresh!). I remember well reading a version of the paper in Fort Collins, Colorado, in 1993 (I think!), but nothing about this result.

It goes as follows: when running a Metropolis-within-Gibbs sampler for component x¹ of a collection of variates x¹,x²,…, thus aiming at simulating from the full conditional of x¹ given x⁻¹ by making a proposal q(x|x¹,x⁻¹), it is perfectly acceptable to use a proposal that depends on a parameter α (no surprise so far!), to generate this parameter α anew at each iteration (still unsurprising, as α can be taken as an auxiliary variable), and to have the distribution of this parameter α depend on the other variates x²,…, i.e., on x⁻¹. This is the surprising part, as adding α as an auxiliary variable would mess up the update of x⁻¹. But the proof as found in the 1995 paper [page 35] does not require considering α as such, as it establishes global balance directly. (Or maybe still detailed balance, when writing the whole Gibbs sampler as a cycle of Metropolis steps.) Terrific! And a whiff mysterious..!
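To see the result in action, here is a minimal sketch under assumed simplifications (a toy target with two independent standard Normal components, my own example rather than the paper's): the random-walk scale α for the x¹ update is redrawn at every iteration from a law that depends on x², and the x¹ marginal still comes out right.

```python
import numpy as np

# Toy Metropolis-within-Gibbs illustrating the Besag et al. appendix result:
# the proposal scale alpha for the x1 update is redrawn at each iteration
# from a distribution that depends on the OTHER component x2, yet the chain
# still targets the joint. Target: independent standard Normals (x1, x2).
rng = np.random.default_rng(1)

def log_cond_x1(x1):
    # log full conditional of x1 (standard Normal, up to a constant)
    return -0.5 * x1 ** 2

n_iter = 100_000
x1, x2 = 0.0, 0.0
samples = np.empty(n_iter)
for t in range(n_iter):
    # draw the proposal scale anew, with a law depending on x2 (the twist!)
    alpha = 0.2 + rng.exponential(0.5 + abs(x2))
    prop = x1 + alpha * rng.normal()
    if np.log(rng.uniform()) < log_cond_x1(prop) - log_cond_x1(x1):
        x1 = prop
    # exact Gibbs update for x2: its full conditional is N(0,1)
    x2 = rng.normal()
    samples[t] = x1

print(f"x1 marginal: mean {samples.mean():.3f}, variance {samples.var():.3f}")
```

With this toy target the x¹ marginal mean and variance should come out near 0 and 1. Had α been carried along as a genuine auxiliary variable, its dependence on x² would have had to be accounted for in the x² update; the appendix argument shows this bookkeeping is unnecessary.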