Archive for email

unexpected thanks

Posted in Statistics, University life on April 16, 2022 by xi'an

I received the following email the other day from an author whose paper I had rejected right after submission:

Dear Prof. Christian Robert,

Thank you very much for your assessment of the paper, your candid feedback and also your encouragement to submit the paper to another journal. We value very much this quick and constructive feedback and the time you need to invest to guarantee such a feedback policy.
Thank you for taking the time to consider our submission and best regards,
Which does not happen that often.

email footprint

Posted in Travel, University life on September 14, 2019 by xi'an

While I was wondering (in Salzburg) at the carbon impact of sending emails with an endless cascade of the past history of exchanges and replies, I found this (rather rudimentary) assessment that, quoting from Berners-Lee, standard emails have an average impact of 4g, while those with long attachments could cost 50g, leading to the fairly astounding figure of an estimated impact of 1.6kg a day, or more than half a ton per year! Quite amazing when considering that a Paris-Birmingham round trip by plane produces about 80kg. Hence justifying a posteriori my habit of removing earlier emails when replying to them. (It takes little effort to do so, especially in mailers where this feature can be set as the default option.)
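For what it is worth, the arithmetic behind these figures is easily checked; here is a back-of-the-envelope Python sketch, where the daily mix of standard and attachment-heavy emails is my own assumption rather than a figure from Berners-Lee:

# back-of-the-envelope check of the email-footprint figures; the per-email
# costs follow the 4g/50g values attributed to Berners-Lee, while the daily
# email counts below are illustrative assumptions
STANDARD_G = 4        # grams of CO2 per standard email
ATTACHMENT_G = 50     # grams of CO2 per email with a long attachment

standard_per_day = 350    # assumed daily number of standard emails
attachment_per_day = 4    # assumed daily number of attachment-heavy emails

daily_g = standard_per_day * STANDARD_G + attachment_per_day * ATTACHMENT_G
yearly_kg = daily_g * 365 / 1000
print(f"daily: {daily_g / 1000:.1f} kg, yearly: {yearly_kg:.0f} kg")
# 350*4 + 4*50 = 1600g = 1.6kg a day, about 584kg a year,
# to be compared with roughly 80kg for a Paris-Birmingham round trip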


about paradoxes

Posted in Books, Kids, Statistics, University life on December 5, 2017 by xi'an

An email I received earlier today about statistical paradoxes:

I am a PhD student in biostatistics, and an avid reader of your work. I recently came across this blog post, where you review a text on statistical paradoxes, and I was struck by this section:

“For instance, the author considers the MLE being biased to be a paradox (p.117), while omitting the much more substantial “paradox” of the non-existence of unbiased estimators of most parameters—which simply means unbiasedness is irrelevant. Or the other even more puzzling “paradox” that the secondary MLE derived from the likelihood associated with the distribution of a primary MLE may differ from the primary. (My favourite!)”

I found this section provocative, but I am unclear on the nature of these “paradoxes”. I reviewed my stat inference notes and came across the classic example that there is no unbiased estimator for 1/p w.r.t. a binomial distribution, but I believe you are getting at a much more general result. If it’s not too much trouble, I would sincerely appreciate it if you could point me in the direction of a reference or provide a bit more detail for these two “paradoxes”.

The text is Chang’s Paradoxes in Scientific Inference, which I indeed reviewed negatively. To answer about the bias “paradox”, it is indeed a neglected fact that, while the average of any transform of a sample obviously is an unbiased estimator of its mean (!), the converse does not hold, namely, an arbitrary transform of the model parameter θ does not necessarily enjoy an unbiased estimator. In Lehmann and Casella, Chapter 2, Section 4, this issue is (just slightly) discussed. But essentially, the transforms that lead to unbiased estimators are mostly the polynomial transforms of the mean parameters… (This also somewhat connects to a recent X validated question as to why MLEs are not always unbiased, although the simplest explanation is that the transform of the MLE is the MLE of the transform!) In exponential families, I would deem the range of transforms with unbiased estimators closely related to the collection of functions that allow for inverse Laplace transforms, although I cannot quote a specific result on this hunch.
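To make this first “paradox” concrete, here is a minimal Python simulation in an assumed Binomial(n, p) setting: the MLE of p^2 is the square of the MLE of p, and its bias p(1-p)/n shows up immediately, while the same model admits no unbiased estimator of 1/p whatsoever:

import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 10, 0.3, 1_000_000

x = rng.binomial(n, p, size=reps)
p_hat = x / n            # MLE of p, unbiased
mle_p2 = p_hat ** 2      # MLE of p^2, i.e. the transform of the MLE

print("E[p_hat^2] ~", mle_p2.mean())             # about 0.111
print("p^2        =", p ** 2)                    # 0.09
print("theory     =", p ** 2 + p * (1 - p) / n)  # 0.09 + 0.021 = 0.111
# an unbiased estimator of p^2 happens to exist, x(x-1)/n(n-1), but none
# exists for 1/p: under Binomial(n, p) every estimator has an expectation
# that is a polynomial in p of degree at most n, and 1/p is no polynomial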

The other “paradox” is that, if h(X) is the MLE of the model parameter θ for the observable X, the distribution of h(X) has a density different from the density of X and, hence, its maximisation in the parameter θ may differ. An example (my favourite!) is the MLE of ||a||² based on x ~ N(a, I), which is ||x||², a poor estimate, and which (strongly) differs from the MLE of ||a||² based on ||x||², which is close to (1-p/||x||²)⁺||x||² and (nearly) admissible [as discussed in the Bayesian Choice].
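A quick Monte Carlo check of this second “paradox” (a Python sketch, using the positive-part form (||x||²-p)⁺, equal to (1-p/||x||²)⁺||x||², as a stand-in for the exact noncentral chi-square MLE):

import numpy as np

rng = np.random.default_rng(1)
p, reps = 10, 200_000
a = np.full(p, 0.5)          # true mean, with ||a||^2 = 2.5
lam = a @ a

x = rng.normal(a, 1.0, size=(reps, p))
y = (x ** 2).sum(axis=1)     # ||x||^2, a noncentral chi-square chi2_p(lam)

mle_primary = y                          # MLE of ||a||^2 based on x
mle_secondary = np.maximum(y - p, 0.0)   # ~ MLE based on ||x||^2 alone

for name, est in (("||x||^2", mle_primary), ("(||x||^2-p)+", mle_secondary)):
    print(name, "bias:", est.mean() - lam, "MSE:", ((est - lam) ** 2).mean())
# ||x||^2 overshoots by p = 10 on average, while the truncated estimator
# has a much smaller bias and a fraction of the quadratic risk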

can you help?

Posted in Statistics, University life on October 12, 2013 by xi'an

An email received a few days ago:

Can you help me answering my query about AIC and DIC?

I want to compare the predictive power of a non-Bayesian model (GWR, geographically weighted regression) and a Bayesian hierarchical model (spLM).
For GWR, DIC is not defined, but AIC is.
For spLM, AIC is not defined, but DIC is.

How can I compare the predictive ability of these two models? Does it make sense to compare AIC of one with DIC of the other?

I did not reply as the answer is in the question: the numerical values of AIC and DIC are not comparable. And since one estimation is Bayesian while the other is not, I do not think the predictive abilities can be compared either. This is not even mentioning my reluctance to use DIC… as renewed in yesterday’s post.
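To make the incomparability concrete, here is a Python sketch of the two constructions on a toy normal-mean model (data, prior and model are all made up for illustration): AIC penalises the maximised log-likelihood, while DIC averages the deviance over the posterior.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
y = rng.normal(1.0, 1.0, size=50)     # toy data, unit variance known

def loglik(mu):
    return stats.norm.logpdf(y, mu, 1.0).sum()

# AIC: maximised log-likelihood penalised by the number of parameters
mu_mle = y.mean()
aic = 2 * 1 - 2 * loglik(mu_mle)

# DIC: deviance averaged over the posterior; with a N(0, 10^2) prior on mu
# the posterior is Gaussian and can be sampled directly
n, s2 = len(y), 10.0 ** 2
post_var = 1.0 / (n + 1.0 / s2)
post_mean = post_var * y.sum()
mu_post = rng.normal(post_mean, np.sqrt(post_var), size=10_000)
mean_dev = np.mean([-2 * loglik(m) for m in mu_post])
p_d = mean_dev - (-2 * loglik(mu_post.mean()))  # effective number of parameters
dic = mean_dev + p_d

print(f"AIC = {aic:.1f}, DIC = {dic:.1f}")
# the two numbers happen to be close here because the prior is flat-ish and
# the model regular, but they are built from different fits (plug-in MLE
# versus posterior average) and do not transfer across paradigms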

question about Gibbs sampling

Posted in Books, Statistics, University life on October 16, 2012 by xi'an

Here is an email I got yesterday:

Voila ma question: Comment programmer avec le Matlab la fonction de densité a posteriori (n’est pas de type connu qui égale au produit la fonction de vraisemblance et la densité de la loi a priori) pour calculer la valeur de cette fonction en un point theta=x (theta est le paramètre a estimer) en fixant les autres paramètres.

Here is my question: how to program in Matlab the posterior density function (which is not of a known type, and which equals the product of the likelihood function and the prior density) in order to compute the value of this function at a point theta = x (theta being the parameter to estimate) while keeping the other parameters fixed.

which is a bit naïve, especially the Matlab part… I answered that the programming issue was rather straightforward, as long as both the prior density function and the likelihood function can be computed. (With Matlab or any other language.)
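For the record, the answer indeed takes a handful of lines in any language; here is a Python (rather than Matlab) sketch, with a made-up Gaussian likelihood and Gamma prior standing in for the writer’s model:

import numpy as np
from scipy import stats

data = np.array([1.2, 0.7, 2.1, 1.5])     # made-up observations

def log_posterior(theta, sigma=1.0):
    # unnormalised log-posterior = log-likelihood + log-prior, here with a
    # N(theta, sigma^2) likelihood and an (arbitrary) Ga(2, 1) prior on theta
    loglik = stats.norm.logpdf(data, theta, sigma).sum()
    logprior = stats.gamma.logpdf(theta, a=2.0, scale=1.0)
    return loglik + logprior

x = 1.3                              # the point theta = x of the email
print(np.exp(log_posterior(x)))      # value up to the normalising constant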
