## Archive for Canada

## non-uniform Laplace generation

Posted in Books, pictures, Statistics, University life with tags Bible code, Canada, France, McGill University, Montréal, Nancy, Non-Uniform Random Variate Generation, prize, Québec, SFDS on June 5, 2019 by xi'an

**T**his year, the French Statistical Society (SFDS) has awarded its Prix Laplace to Luc Devroye, author of the Non-Uniform Random Variate Generation bible, among many other achievements! He will receive the prize at the 2019 meeting in Nancy, this very week.

## efficiency and the Fréchet-Darmois-Cramér-Rao bound

Posted in Books, Kids, Statistics with tags Académie des Sciences, best unbiased estimator, Canada, Canadian Journal of Statistics, Cramer-Rao lower bound, cross validated, efficiency, Fréchet-Darmois-Cramèr-Rao bound, George Darmois, James-Stein estimator, mathematical statistics, Maurice Fréchet on February 4, 2019 by xi'an

**F**ollowing some entries on X validated, and after grading a mathematical statistics exam involving the Cramér-Rao bound, or rather the Fréchet-Darmois-Cramér-Rao bound to include both French contributors pictured above, I wonder as usual about the relevance of a concept of *efficiency* outside [and even inside] the restricted case of unbiased estimators. The general (frequentist) version states that the variance of an estimator δ of [any transform of] θ with bias b(θ) is bounded from below by

I(θ)⁻¹ (1+b'(θ))²

while a Bayesian version is the van Trees inequality on the integrated squared error loss

(E(I(θ))+I(π))⁻¹

where I(θ) and I(π) are the Fisher information of the model and the Fisher information of the prior, respectively. But this opens a whole can of worms in my opinion, since

- establishing that a given estimator is efficient requires computing both the bias and the variance of that estimator, not an easy task when considering a Bayes estimator or even the James-Stein estimator. I actually do not know whether any of the estimators dominating the standard Normal mean estimator has been shown to be efficient (although there exist results for closed-form expressions of the quadratic risk of the James-Stein estimator, including one of mine that the Canadian Journal of Statistics published verbatim in 1988). Or is there a result that a Bayes estimator associated with the quadratic loss is by default efficient in either the first or the second sense?
- while the initial Fréchet-Darmois-Cramér-Rao bound is restricted to unbiased estimators (i.e., b(θ)≡0) and unable to produce efficient estimators in any setting other than the natural parameter of an exponential family, moving to the general case means there exists one efficiency notion for every bias function b(θ). This makes the notion quite weak, while not necessarily producing efficient estimators anyway, which is the major impediment to taking it seriously;
- moving from the variance to the squared error loss is not more “natural” than using any [other] convex combination of variance and squared bias, creating a whole new class of optimalities (a grocery of cans of worms!);
- I never got into the van Trees inequality, so I cannot say much, except that comparing various priors is delicate since the integrated risks are computed against different measures on the parameter.
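As a quick numerical illustration of the first two points (a sketch of mine, not from the post): in a Normal N(θ,σ²) model with n observations, the shrinkage estimator δ = c·x̄ has bias b(θ) = (c−1)θ, so the biased bound (1+b′(θ))²/I(θ) equals c²σ²/n and is attained exactly, i.e., each bias function comes with its own trivially "efficient" estimator:

```python
import numpy as np

# Sanity check: delta = c * xbar is linear in the sufficient statistic,
# so it attains the biased Cramér-Rao bound matching its own bias function.
rng = np.random.default_rng(0)
theta, sigma, n, c = 1.0, 2.0, 50, 0.8   # arbitrary illustration values
reps = 200_000

# sampling distribution of xbar, then of the shrinkage estimator
xbar = rng.normal(theta, sigma / np.sqrt(n), size=reps)
delta = c * xbar

fisher_info = n / sigma**2                 # I(theta) for the Normal mean
bound = (1 + (c - 1)) ** 2 / fisher_info   # = c^2 * sigma^2 / n

print(np.var(delta), bound)   # the two agree up to Monte Carlo error
```

The Monte Carlo variance of δ matches the bound c²σ²/n, for any choice of c, which is precisely why a separate efficiency notion per bias function b(θ) carries little discriminating power.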

## ISBA World meetings to come

Posted in pictures, Statistics, Travel, University life with tags Canada, China, conference, ISBA, Italy, Montréal, Venezia, world meeting on January 27, 2019 by xi'an

## Longhand [Pinot Grigio]

Posted in Statistics with tags British Columbia, Canada, Longhand, Okanagan Valley, pinot gris, Vancouver Island on January 20, 2019 by xi'an

## Juno Beach [jatp]

Posted in pictures, Travel with tags Abbaye d'Ardenne massacre, Caen, Canada, Canadian troops, Colin Gibson, Courseulles, D Day, D-Day beaches, June 1944, Juno Beach, Juno Park, memorial, Normandy, sculpture, Second World War, war prisoner, WWII on January 2, 2019 by xi'an

## a question from McGill about The Bayesian Choice

Posted in Books, pictures, Running, Statistics, Travel, University life with tags Bayes factors, Bayesian hypothesis testing, Canada, cross validated, improper prior, McGill University, Montréal, posterior probability on December 26, 2018 by xi'an

**I** received an email from a group of McGill students working on Bayesian statistics and using The Bayesian Choice (although the exercise pictured below is not in the book, the closest being Exercise 1.53, inspired from Raiffa and Schlaifer, 1961, and Exercise 5.10, as mentioned in the email):

There was a question that some of us cannot seem to agree on the correct answer to. Here are the issues:

Some people believe that the answer to both is ½, while others believe it is 1. The reasoning for ½ is that, since the Beta is a continuous distribution, we can never have θ exactly equal to ½. Thus, regardless of α, the probability that θ=½ in that case is 0. Hence it is ½. I found a related Stack Exchange question that seems to indicate this as well.

The other side is that, by the Markov property and the mean of the Beta(α,α), as α goes to infinity we will approach ½ with probability 1, and hence the limit as α goes to infinity for both (a) and (b) is 1. I think this could also make sense in another context, as if you use the Bayes factor representation. This is similar, I believe, to Questions 5.10 and 5.11 in The Bayesian Choice.

As it happens, the answer is ½ in the first case (a), because π(H⁰) is ½ regardless of α, and 1 in the second case (b), because the evidence against H⁰ goes to zero as α goes to zero *(watch out!)*, along with the mass of the prior on any compact subset of (0,1), since the normalising constant Γ(2α)/Γ(α)² goes to zero with α. (The limit does not correspond to a proper prior and hence is somewhat meaningless.) However, when α goes to infinity, the evidence against H⁰ goes to infinity and the posterior probability of ½ goes to zero, despite the prior under the alternative being more and more concentrated around ½!
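The α→0 part of this answer can be checked numerically (a sketch of mine, the exercise itself not being reproduced here, with assumed data of x successes in m Bernoulli trials, prior weight ½ on H⁰: θ=½, and a Beta(α,α) prior under the alternative):

```python
import numpy as np
from scipy.special import betaln, gammaln

def post_prob_H0(x, m, alpha):
    """Posterior probability of H0: theta = 1/2, with equal prior weights on H0 and H1."""
    log_m0 = -m * np.log(2.0)                                         # marginal likelihood under H0
    log_m1 = betaln(alpha + x, alpha + m - x) - betaln(alpha, alpha)  # marginal under Beta(alpha, alpha)
    bf01 = np.exp(log_m0 - log_m1)                                    # Bayes factor in favour of H0
    return bf01 / (1.0 + bf01)

x, m = 7, 20   # arbitrary data, for illustration only
for alpha in (1.0, 1e-2, 1e-4, 1e-6):
    log_norm = gammaln(2 * alpha) - 2 * gammaln(alpha)   # log of Gamma(2a) / Gamma(a)^2
    print(f"alpha={alpha:8.0e}  Gamma(2a)/Gamma(a)^2={np.exp(log_norm):.2e}  "
          f"P(H0|x)={post_prob_H0(x, m, alpha):.4f}")
```

Running this shows both the normalising constant Γ(2α)/Γ(α)² vanishing with α and the posterior probability of H⁰ climbing towards 1 as α goes to zero, in line with the answer to case (b) above.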