Archive for Hellinger loss

risk-averse Bayes estimators

Posted in Books, pictures, Statistics on January 28, 2019 by xi'an

An interesting paper came out on arXiv in early December, written by Michael Brand from Monash. It is about risk-averse Bayes estimators, which are defined so as to avoid the use of loss functions (although why one should avoid loss functions is not made very clear in the paper). Close to MAP estimates, they bypass the dependence of said MAPs on parameterisation by maximising instead π(θ|x)/√I(θ), which is invariant by reparameterisation if not by a change of dominating measure. This form of MAP estimate is called the Wallace-Freeman (1987) estimator [of which I had never heard].
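In display form, writing I(θ) for the Fisher information, the estimator is

\hat{\theta}_{\text{WF}}(x) = \arg\max_\theta\; \dfrac{\pi(\theta|x)}{\sqrt{I(\theta)}}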

The formal definition of a risk-averse estimator is still based on a loss function in order to produce a proper version of the probability of being “wrong” in a continuous environment. The difference between estimator and true value θ, as expressed by the loss, is enlarged by a scale factor k pushed to infinity, meaning that differences outside the immediate neighbourhood of zero become irrelevant. In the case of a countable parameter space, this essentially produces the MAP estimator. In the continuous case, for “well-defined” and “well-behaved” loss functions, estimators, and densities, including an invariance to parameterisation (as in my own intrinsic losses of old!), which the author calls likelihood-based loss functions, mentioning f-divergences, the resulting estimator(s) is (are) a Wallace-Freeman estimator (of which there may be several). I did not get very deep into the study of the convergence proof, which seems to borrow more from real analysis à la Rudin than from functional analysis or measure theory, but I keep returning to the apparent dependence of the notion on the dominating measure, which bothers me.
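As a purely illustrative aside (mine, not the paper's), here is a small Python sketch maximising π(θ|x)/√I(θ) in a toy Binomial(n,θ) model with a conjugate Beta(a,b) prior, for which I(θ)=n/θ(1−θ) and both this estimate and the plain MAP are available in closed form:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy illustration (not from the paper): Wallace-Freeman / invariant MAP estimate
# argmax_theta pi(theta|x) / sqrt(I(theta)) for a Binomial(n, theta) observation
# with a Beta(a, b) prior, where I(theta) = n / (theta (1 - theta)).
a, b = 2.0, 2.0      # Beta prior hyperparameters (illustrative values)
n, x = 10, 3         # sample size and number of successes (illustrative values)

def neg_log_wf_objective(theta):
    # log posterior (up to an additive constant) minus half the log Fisher information
    log_post = (a + x - 1) * np.log(theta) + (b + n - x - 1) * np.log(1 - theta)
    log_fisher = np.log(n) - np.log(theta) - np.log(1 - theta)
    return -(log_post - 0.5 * log_fisher)

wf = minimize_scalar(neg_log_wf_objective, bounds=(1e-6, 1 - 1e-6), method="bounded").x
wf_closed = (a + x - 0.5) / (a + b + n - 1)   # closed-form maximiser of the same objective
map_est = (a + x - 1.0) / (a + b + n - 2.0)   # ordinary MAP, for comparison
print(f"WF (numerical): {wf:.4f}  WF (closed form): {wf_closed:.4f}  MAP: {map_est:.4f}")
```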

O’Bayes 2013

Posted in Statistics, Travel, University life, Wines on December 17, 2013 by xi'an

It was quite sad that we had to start the O'Bayes 2013 conference with the news that Dennis Lindley had passed away, but the meeting is the best opportunity to share memories and stress his impact on the field. This is what happened yesterday in and around the talks. The conference(s) is/are very well-attended, with 200-some participants in total, and many young researchers. As in the earlier meetings, the talks are a mixture of “classical” objective Bayes and non-parametric Bayes (my own feeling being of a very fuzzy boundary between both perspectives, both relying to some extent on asymptotics for validation). I enjoyed in particular Jayanta Ghosh's talk on the construction of divergence measures for reference priors that would necessarily lead to the Jeffreys prior, with the side open problem of determining whether there are only three functional distances (Hellinger, Kullback and L1) that are independent of the dominating measure. (Upon reflection, I am not sure about this question and whether I got it correctly, as one can always use the prior π as the dominating measure and look at divergences of the form

J(\pi) = \int d\left(\dfrac{\text{d}\pi(\cdot|x)}{\text{d}\pi(\cdot)}\right) m(x)\text{d}x

which seems to open up the range of possible d’s…) However, and in the great tradition of Bayesian meetings, the best part of the day was the poster session, from enjoying a (local) beer with old friends to discussing points and details. (It is just unfortunate that by 8:15 I was simply sleeping on my feet and could not complete my round of O'Bayes posters, not even mentioning EFaB posters that sounded equally attractive… I even missed discussing around a capture-recapture poster!)

an unbiased estimator of the Hellinger distance?

Posted in Statistics on October 22, 2012 by xi'an

Here is a question I posted on Stack Exchange a while ago:

In a setting where one observes X1,…,Xn drawn from a distribution with (unknown) density f, I wonder if there is an unbiased estimator (based on the Xi‘s) of the Hellinger distance to another distribution with known density f0, namely

\mathfrak{H}(f,f_0)=\left\{1-\int\sqrt{f_0(x)\,f(x)}\,\text{d}x\right\}^{1/2}

Now, Paulo has posted an answer that is rather interesting, if formally “off the point”. There exists a natural unbiased estimator of H², if not of H itself, based on the original sample and using the alternative representation

\mathfrak{H}^2(f,f_0)=1-\mathbb{E}_f[\sqrt{f_0(X)/f(X)}]

for the Hellinger distance. In addition, this estimator is guaranteed to enjoy a finite variance since

\mathbb{E}_f\left[\sqrt{f_0(X)/f(X)}^{\;2}\right]=\int f_0(x)\,\text{d}x=1\,.
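(For what it is worth, here is a small Python sketch, mine rather than Paulo's, checking the unbiasedness by simulation in a toy case where f is a N(0,1) density, hence in fact known for the purpose of the check, and f0 a N(1,1) density:)

```python
import numpy as np
from scipy import stats

# Toy Monte Carlo check that (1/n) sum_i sqrt(f0(X_i)/f(X_i)) is unbiased for
# 1 - H^2(f, f0), here with f = N(0,1) and f0 = N(1,1).
rng = np.random.default_rng(0)
f, f0 = stats.norm(0.0, 1.0), stats.norm(1.0, 1.0)

def h2_hat(sample):
    # estimator of the squared Hellinger distance H^2(f, f0)
    return 1.0 - np.mean(np.sqrt(f0.pdf(sample) / f.pdf(sample)))

n, reps = 100, 10_000
estimates = [h2_hat(rng.normal(0.0, 1.0, n)) for _ in range(reps)]

# exact value for two unit-variance normals: H^2 = 1 - exp(-(mu - mu0)^2 / 8)
true_h2 = 1.0 - np.exp(-1.0 / 8.0)
print(f"Monte Carlo mean: {np.mean(estimates):.4f}   exact H^2: {true_h2:.4f}")
```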

Considering this question again, I am now fairly convinced there cannot be an unbiased estimator of H, as it behaves like a standard deviation for which there usually is no unbiased estimator!

Emails I cannot reply to

Posted in Books, Statistics on May 26, 2010 by xi'an

I received this email yesterday from a reader of The Bayesian Choice (still selling on Amazon at a bargain price of $32.97!):

can you guid me about  following  question kindly please? in  your  book “the bayesian choice ” chap.2 problem 2.45 asked :
if x has gamma distribution with shap parameter alpha and scale parameter tetha , and tetha has gamma distribution  with “v ” and “x0” parameters as shape and scale parameters show that bayes estimatore of tetha under Hellinger loss function is of the form of “k/(x+x0)”
if  we calculate Hellinger loss function for this distribution we see a loss function with nearly beta distribution.
i tried to earn this answer for bayes estimator ,but i could not  see this answer, can u give me a hint for this question?

Alas (?) I obviously cannot reply without providing the answer…. Of course, if there is a problem with this exercise, just let me know! But once you write down the Hellinger loss as

\text{L}(\theta,\delta) = 1 - \int \sqrt{ f_\theta(x) \,f_\delta(x) } \,\text{d}x

the remainder of the exercise is sheer calculus…
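(For the record, and writing the G(α,θ) density as f_θ(x) ∝ θ^α x^{α−1} e^{−θx}, the above integral is available in closed form,

\int \sqrt{f_\theta(x)\,f_\delta(x)}\,\text{d}x = \left(\dfrac{2\sqrt{\theta\delta}}{\theta+\delta}\right)^{\alpha}

which only depends on (θ,δ) through the ratio δ/θ.)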