As in every term, here comes the painful week of grading hundreds of exams! My mathematical statistics exam was highly traditional and did not even involve Bayesian material, as the few students who attended the lectures were so eager to discuss sufficiency and ancillarity that I decided to spend an extra lecture on these notions rather than rushing through conjugate priors. Highly traditional indeed, with an inverse Gaussian model and a few basic consequences of Basu's theorem actually covered during this lecture, plus mostly standard multiple-choice questions about maximum likelihood estimation and R programming… Among the major trends this year, I spotted the widespread use of strange derivatives of negative powers, the simultaneous derivation of two incompatible convergent estimates, the common mix-up between the inverse of a sum and the sum of the inverses, the inability to produce the MLE of a constant transform of the parameter, the choice of estimators depending on the parameter, and a lack of concern for Fisher informations equal to zero.
This entry was posted on February 7, 2018 at 12:18 am and is filed under Kids, Statistics, University life with tags Basu's theorem, bootstrap, convergence, copies, correction, exam, mathematical statistics, Université Paris Dauphine.
February 9, 2018 at 6:32 pm
I just found that this is known as the Neyman-Scott paradox (Econometrica, 16(1), 1948), although there is nothing paradoxical in it.
February 9, 2018 at 10:21 pm
Yes indeed! I was looking for the name. The paradox from a Bayesian perspective is that the Jeffreys prior on the full model leads to an inadmissible and inconsistent estimator of the variance, while the Jeffreys prior on the MLE model produces a consistent estimator!
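To make the inconsistency concrete, here is a minimal R sketch of the sampling side of the story (the number of pairs, the nuisance means and the true variance below are arbitrary choices for illustration, not taken from the exam): pairs Xi1, Xi2 are drawn with their own mean mu(i) and a common variance sigma2, and the profile MLE of sigma2 is compared with the estimator based on the within-pair differences.

# sketch of the Neyman-Scott inconsistency; all true values are arbitrary
set.seed(42)
n      <- 1e5                         # number of pairs
sigma2 <- 4                           # true common variance
mu     <- rnorm(n, 0, 10)             # one nuisance mean per pair
x1     <- rnorm(n, mu, sqrt(sigma2))
x2     <- rnorm(n, mu, sqrt(sigma2))
d      <- x1 - x2                     # differences are N(0, 2*sigma2), free of the mu(i)'s
c(mle  = sum(d^2) / (4 * n),          # profile MLE of sigma2: converges to sigma2/2
  reml = sum(d^2) / (2 * n))          # difference-based estimator: converges to sigma2

With this many pairs, the first entry should settle near sigma2/2 = 2 and the second near sigma2 = 4.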
February 9, 2018 at 4:45 pm
Christian,
This is really a good question for an exam despite its simplicity.
Your solution results from directly expressing the likelihood of the differences Zi = Xi1 - Xi2, for i = 1 to n, which is free of the mu(i)'s; this corresponds exactly to a residual (R Thompson's term) or restricted (D Harville's) likelihood.
In classical ML, you have to estimate both the variance sigma2 and the mu(i)'s jointly from the likelihood of the original data, resulting in half the previous estimator.
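As a sanity check on that factor of one half (with simulated data of my own choosing, not the exam's): since (Xi1 - Xbar(i))^2 + (Xi2 - Xbar(i))^2 = (Xi1 - Xi2)^2/2 when Xbar(i) = (Xi1 + Xi2)/2, the full-data MLE is exactly half the difference-based estimator on any sample.

# the full-data MLE of sigma2 is exactly half the difference-based (REML) estimator
set.seed(1)
n    <- 10
x1   <- rnorm(n, 1:n, 2)              # arbitrary means mu(i) = i, common sd
x2   <- rnorm(n, 1:n, 2)
xbar <- (x1 + x2) / 2                 # per-pair MLE of mu(i)
mle  <- sum((x1 - xbar)^2 + (x2 - xbar)^2) / (2 * n)
reml <- sum((x1 - x2)^2) / (2 * n)
all.equal(mle, reml / 2)              # TRUE, whatever the data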
Incidentally, another quadratic unbiased estimator is
sigma2 = (sum over i = 1 to n of (Xi1^2 - Xi1*Xi2)) / n,
but it is not translation invariant. Charles Henderson used to give this example to illustrate the fact that an unbiased estimator is not necessarily translation invariant.
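A quick R illustration of Henderson's point (all numerical values are again arbitrary): averaged over replications this quadratic estimator does sit at sigma2, yet adding the same constant to both coordinates of every pair changes its value on any given dataset, so it is unbiased without being translation invariant.

# Henderson's example: unbiased for sigma2, yet not translation invariant
set.seed(7)
n <- 50; sigma2 <- 4
mu   <- rnorm(n, 0, 10)                          # nuisance means
quad <- function(x1, x2) mean(x1^2 - x1 * x2)    # the quadratic estimator above
sims <- replicate(1e4, {
  x1 <- rnorm(n, mu, sqrt(sigma2))
  x2 <- rnorm(n, mu, sqrt(sigma2))
  c(original = quad(x1, x2),
    shifted  = quad(x1 + 10, x2 + 10))           # same data, both coordinates shifted by 10
})
rowMeans(sims)   # both averages are close to sigma2 = 4: unbiasedness survives the shift
sims[, 1]        # but on a single dataset the two values differ: no translation invariance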
February 9, 2018 at 6:19 pm
Yes, discussing the matter this morning with Jean-Michel, I eventually realised the mistake! Thanks!
February 7, 2018 at 11:04 am
Thanks Christian for this very challenging exam!
In fact, in Ex 1, question 1(d) is the REML (residual maximum likelihood estimator; Patterson & Thompson, 1971), while (e) = (d)/2 is the usual ML (maximum likelihood estimator). The first is unbiased while the second is not.