Archive for copies


Posted in Kids, Statistics, University life on February 7, 2018 by xi'an
As in every term, here comes the painful week of grading hundreds of exams! My mathematical statistics exam was highly traditional and did not even involve Bayesian material, as the few students who attended the lectures were so eager to discuss sufficiency and ancillarity that I decided to spend an extra lecture on these notions rather than rushing through conjugate priors. Highly traditional indeed, with an inverse Gaussian model and a few basic consequences of Basu's theorem actually exposed during this lecture. Plus mostly standard multiple-choice questions about maximum likelihood estimation and R programming… Among the major trends this year, I spotted the widespread use of strange derivatives of negative powers, the simultaneous derivation of two incompatible convergent estimators, the common mix-up between the inverse of a sum and the sum of the inverses, the inability to produce the MLE of a constant transform of the parameter, the choice of estimators depending on the parameter, and a lack of concern for a Fisher information equal to zero.
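
As a reminder of the standard result behind that transform question (this is textbook invariance, not the exam's specific wording): if \hat\theta is the MLE of \theta, then g(\hat\theta) is the MLE of g(\theta) for any transform g. For instance, with an Exp(λ) sample of size n,

\hat\lambda = 1/\bar{x}_n \qquad\Longrightarrow\qquad \widehat{1/\lambda} = \bar{x}_n

so producing the MLE of 1/λ requires no new maximisation, only plugging the MLE of λ into the transform.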

post-grading weekend

Posted in Kids, pictures, Statistics, University life on January 19, 2015 by xi'an

Now my grading is over, I can reflect on the unexpected difficulties in the mathematical statistics exam. I knew that the first question in the multiple-choice exercise, borrowed from Cross Validated, was going to be quasi-impossible and indeed only one student out of 118 managed to find the right solution. More surprisingly, most students did not manage to solve the (absence of an) MLE when observing that n unobserved exponential Exp(λ) variates were larger than a fixed bound δ. I was also amazed that they did poorly on a N(0,σ²) setup, failing to see that

\mathbb{E}[\mathbb{I}(X_1\le -1)] = \Phi(-1/\sigma)

and determine an unbiased estimator that can be improved by Rao-Blackwellisation. No student reached the conditioning part. And a rather frequent mistake, more understandable given the limited exposure they had to Bayesian statistics: many confused the parameter λ with the observation x in the prior, writing

\pi(\lambda|x) \propto \lambda \exp\{-\lambda x\} \times x^{a-1} \exp\{-bx\}

instead of

\pi(\lambda|x) \propto \lambda \exp\{-\lambda x\} \times \lambda^{a-1} \exp\{-b\lambda\}

hence could not derive a proper posterior.
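
For the record, the correct version completes in one line: combining the Exp(λ) likelihood with the Ga(a,b) prior gives

\pi(\lambda|x) \propto \lambda^{a} \exp\{-(b+x)\lambda\}

that is, λ|x ∼ Ga(a+1, b+x), a proper Gamma posterior, whereas the confused version mixes densities in x and λ and does not integrate to anything meaningful in λ.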

grading weekend

Posted in Kids, pictures, Statistics, University life on January 11, 2015 by xi'an