Archive for The Bayesian Choice

a [counter]example of minimaxity

Posted in Books, Kids, Statistics, University life on December 14, 2022 by xi'an

A chance question on X validated made me reconsider minimaxity over the weekend. Consider a Geometric G(p) variate X. What is the minimax estimator of p under squared error loss? I thought it could be obtained via (Beta) conjugate priors, but, following Dyubin (1978), the minimax estimator corresponds to a prior with point masses at ¼ and 1, resulting in a constant estimator equal to ¾ everywhere, except when X=0 where it is equal to 1. The actual question used a penalised quadratic loss, dividing the squared error by p(1-p), which penalises very strongly errors at p=0 and p=1, and hence suggested an estimator equal to 1 when X=0 and to 0 otherwise. This proves to be the (unique) minimax estimator, with constant risk equal to 1. This reminded me of the fantastic 1981 paper by George Casella and Bill Strawderman on the estimation of a bounded normal mean, where the least favourable prior is supported by two atoms if the bound is small enough. Figure 1 in the Negative Binomial extension by Morozov and Syrova (2022) exploits the same principle. (Nothing Orwellian there!) If nothing else, a nice illustration for my Bayesian decision theory course!
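
As a quick numerical check, here is a minimal sketch (in Python, assuming the failures-before-first-success parameterisation of the Geometric, so that P(X=0)=p) verifying that this estimator does achieve constant risk 1 under the penalised loss:

import numpy as np

# risk of the estimator d(0)=1, d(x)=0 for x>=1, under the penalised loss
# (d-p)^2 / (p(1-p)), when X ~ Geometric(p) counts failures before the first
# success, so that P(X=0) = p and P(X>=1) = 1-p
def risk(p):
    return (p * (1 - p)**2 + (1 - p) * p**2) / (p * (1 - p))

ps = np.linspace(0.01, 0.99, 99)
print(np.allclose(risk(ps), 1.0))  # True: the risk is constant, equal to 1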

In Bayesian statistics, data is considered nonrandom…

Posted in Books, Statistics, University life on July 12, 2021 by xi'an

A rather weird question popped up on X validated, namely why Bayesian analysis relies on a sampling distribution if the data is nonrandom. While a given sample is indeed a deterministic object, and hence nonrandom from this perspective, I replied that, on the contrary, Bayesian analysis sets the observed data as the realisation of a random variable, in order to condition upon this realisation to construct a posterior distribution on the parameter. Which is quite different from calling it nonrandom! But, presumably putting too much meaning into and spending too much time on this query, I remain somewhat bemused by the line of thought that led to this question…
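
To make the conditioning concrete, here is a minimal sketch under an assumed (and entirely hypothetical) Beta-Binomial setup: the observed count x is a fixed number, yet it enters the analysis as the realisation of a Binomial variate, through the likelihood evaluated at x:

from scipy.stats import beta

n, x = 20, 13      # hypothetical sample size and observed (fixed!) count
a, b = 1.0, 1.0    # uniform Beta(1,1) prior on the probability p
# conjugate update: the posterior conditions on the realisation X = x
posterior = beta(a + x, b + n - x)
print(posterior.mean())  # posterior mean (x + a)/(n + a + b) = 0.636...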

Bayes @ NYT

Posted in Books, Kids, Statistics, University life on August 8, 2020 by xi'an

A tribune (op-ed) in yesterday's NYT on the importance of being Bayesian. When an epidemiologist. A tribune that was forwarded to me by a few friends (and that I had missed in my addictive monitoring of the journal!). It is written by a Canadian journalist writing about mathematics (and obviously statistics). And it brings to the general public the main motivation for adopting a Bayesian approach, namely its coherent handling of uncertainty and its ability to update in the face of new information. (Although it might be noted that other flavours of statistical analysis are also able to update their conclusions when given more data.) The COVID situation is a perfect case study in Bayesianism, in that there are so many levels of uncertainty and imprecision, from the models themselves, to the data, to the outcomes of the tests, &tc. The article is journalistic, of course, but it quotes a range of statisticians and epidemiologists, including Susan Holmes, who, I learned, was quarantined for 105 days in rural Portugal!, developing a hierarchical Bayes modelling of the prevalent SEIR model, and David Spiegelhalter, discussing Cromwell's Law (or, better, the humility law, to avoid referring to a fanatical and tyrannical Puritan who put Ireland to fire and the sword!, and who had in fact very little humility himself). Reading the comments is both hilarious (it does not take long to reach the point where Trump is mentioned, and Taleb's stance on models and tails makes an appearance) and revealing, as many readers do not understand the meaning of Bayes' inversion between causes and effects, or even the meaning of Jeffreys' bar, |, as conditioning.
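
For the record, the inversion in question is nothing but Bayes' theorem, turning the probability of the effect given a potential cause into the probability of the cause given the observed effect:

\Bbb P(\text{cause}\mid\text{effect}) = \dfrac{\Bbb P(\text{effect}\mid\text{cause})\,\Bbb P(\text{cause})}{\Bbb P(\text{effect})}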

ENSEA & CISEA 2019

Posted in Books, pictures, Statistics, Travel, University life on June 26, 2019 by xi'an

I found my (short) trip to Abidjan for the CISEA 2019 conference quite fantastic, as it allowed me to meet with old friends, from the earliest days at CREST and even before, and to make new ones. Including local students of ENSEA who had taken a Bayesian course out of my Bayesian Choice book. And who had questions about the nature of priors and the difficulty they had in accepting that several replies were possible with the same data! I wish I had had more time to discuss the relativity of Bayesian statements with them, but this was a great and rare opportunity to find avid readers of my books! I also had a long chat with another student worried about the use or mis-use of reversible jump algorithms to draw inference on time-series models in Bayesian Essentials, a chat that actually demonstrated his perfect understanding of the matter. And it was fabulous to meet so many statisticians and econometricians from West Africa, most of them French-speaking. My only regret is not having any free time to visit Abidjan or its neighbourhood, as the schedule of the conference did not allow for it [or even for a timely posting of this post!], especially as it regularly ran overtime. (But it did provide a wide range of new local dishes that I definitely enjoyed tasting!) We are now discussing further opportunities to visit there, e.g. by teaching a short course at the Master or PhD level.

from tramway to Panzer (or back!)…

Posted in Books, pictures, Statistics on June 14, 2019 by xi'an

Although it is usually presented as the tramway problem, namely estimating the number of tram or bus lines in a city after observing a single line number, including in The Bayesian Choice by yours truly, the original version of the problem is about German tanks, Panzer V tanks to be precise, whose total number M was to be estimated by the Allies from the serial numbers of a number k of observed tanks. The Riddler restates the problem when the only available information is the smallest, 22, and largest, 144, observed numbers, with no information about the number k itself. I am unsure what the Riddler means by “best” estimate, but a posterior distribution on M (and k) can certainly be constructed for a prior like 1/k × 1/M² on (k,M). (Using M² to make sure the posterior mean does exist.) The joint distribution of the order statistics is

\frac{k!}{(k-2)!} M^{-k} (144-22)^{k-2}\, \Bbb I_{2\le k\le M\ge 144}

which makes the computation of the posterior distribution rather straightforward. Here is the posterior surface (with an unfortunate rendering of an artefactual horizontal line at 237!), showing a concentration near the lower bound M=144. The posterior mode is actually achieved at M=144 and k=7, while the (rounded) posterior means are M=169 and k=9.
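
As a minimal sketch of this computation (in Python, with grids on k and M truncated for convenience, since the posterior tails are negligible), combining the above density with the 1/(k M²) prior and normalising over the grid recovers the reported mode and means:

import numpy as np
from scipy.special import gammaln

lo, hi = 22, 144
ks = np.arange(2, 51)      # truncated grid on k
Ms = np.arange(hi, 2001)   # truncated grid on M >= 144
K, M = np.meshgrid(ks, Ms, indexing="ij")

# log posterior up to a constant: log k!/(k-2)! - k log M + (k-2) log(144-22),
# plus the log prior -log k - 2 log M
logpost = (gammaln(K + 1) - gammaln(K - 1) - K * np.log(M)
           + (K - 2) * np.log(hi - lo) - np.log(K) - 2 * np.log(M))
post = np.exp(logpost - logpost.max())
post /= post.sum()

i, j = np.unravel_index(post.argmax(), post.shape)
print("mode:", ks[i], Ms[j])                                   # k=7, M=144
print("means:", post.sum(axis=1) @ ks, post.sum(axis=0) @ Ms)  # about 9 and 169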