Archive for grading

a rush to grade

Posted in Kids, pictures, Statistics, University life on October 29, 2020 by xi'an

Terry Tao on Bayes… and Trump

Posted in Books, Kids, Statistics, University life on June 13, 2016 by xi'an

“From the perspective of Bayesian probability, the grade given to a student can then be viewed as a measurement (in logarithmic scale) of how much the posterior probability that the student’s model was correct has improved over the prior probability.” T. Tao, what’s new, June 1

Jean-Michel Marin pointed out to me the recent post of Terry Tao on setting a subjective prior for allocating partial credit in multiple answer questions. (Although I would argue that the main purpose of multiple answer questions is to expedite grading!) The post considers only true-false questionnaires, in the case when the student produces a probabilistic assessment of her confidence in each answer, in the form of a probability p for each question. The goal is then to devise a grading function f such that the score is f(p) when the answer is correct and f(1-p) when it is wrong. This sounds very much like scoring weather forecasters and hence like designing proper scoring rules. Which reminds me of the first time I heard a talk about this: it was at Purdue, circa 1988, when Morrie DeGroot gave a talk on scoring forecasters, based on a joint paper he had written with Susie Bayarri. The scoring rule is proper if the expected reward is maximised at p=q, where p is the probability reported by the student and q her true belief. Terry Tao reaches the well-known conclusion that the grading function should be f(p)=log₂(2p), where log₂ denotes the base-2 logarithm. One property I was unaware of is that the total expected score over the N questions writes as N+log₂(L), where L is the likelihood associated with the student's subjective model. (This is the only true Bayesian aspect of the problem.)
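For a numerical illustration of this properness property, here is a minimal sketch (in Python; the helper names and the example belief q=0.8 are mine, not Tao's) checking that, under the grading function f(p)=log₂(2p), the expected score is maximised by reporting one's true belief:

import numpy as np

# grading function of the reported probability p: f(p) = log2(2p),
# i.e. score 1 for p=1, score 0 for p=1/2, and arbitrarily negative
# scores as p goes to zero on a correct answer
def score(p):
    return np.log2(2 * p)

# expected score when the true belief is q but the reported value is p
def expected_score(p, q):
    return q * score(p) + (1 - q) * score(1 - p)

q = 0.8                                  # true belief (arbitrary example)
grid = np.linspace(0.01, 0.99, 99)       # candidate reported probabilities
best = grid[np.argmax(expected_score(grid, q))]
print(best)                              # 0.8: reporting p=q is optimal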

An interesting and more Bayesian last question from Terry Tao is what to do when the probabilities themselves are uncertain. More Bayesian because this is where I would introduce a prior model on this uncertainty, in a hierarchical fashion, in order to estimate the true probabilities. (A non-informative prior makes its way into the comments.) Of course, all this amounts to a lot of extra work, which rather defeats the initial incentive for asking multiple choice questions…

One may wonder at the link with scary Donald and there is none! But the next post by Terry Tao is entitled “It ought to be common knowledge that Donald Trump is not fit for the presidency of the United States of America”. And unsurprisingly, as an opinion post, it attracted a large number of non-mathematical comments.

done! [#2]

Posted in Kids, Statistics, University life on January 21, 2016 by xi'an

Phew! I just finished my enormous pile of homework for the computational statistics course… This massive pile is due to an unexpected number of students registering for the Data Science Master at ENSAE and Paris-Dauphine. As I was not aware of this surge, I kept to my practice of asking students to hand back solved exercises from Monte Carlo Statistical Methods at the beginning of each class. And I could not change the rules of the game once the course had started! Next year, I'll make sure to get some backup for grading those exercises. Or go for group projects instead…

done! [#1]

Posted in Kids, pictures, University life on January 16, 2016 by xi'an

After spending a few hours grading my 127 exams on most nights of this week, I am finally done with it! One of the exam questions was the simulation of XY when (X,Y) is a bivariate normal vector with correlation ρ, following the trick described in an X validated question asked a few months ago, namely that

XY≡R{cos(πU)+ρ}

but no one managed to establish this representation. And, as usual, some students got confused between parameters θ and observations x when writing a posterior density, since the density of the prior was defined in the exam with the dummy x, thereby recovering the prior as the posterior. Nothing terrible and nothing exceptional with this cohort of undergraduates. And now I still have to go through my second pile of exams for the graduate course I taught on Bayesian computational tools…
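For the record, here is a quick Monte Carlo check of the representation displayed above (a minimal Python sketch; the post does not spell out the distributions involved, so I assume R is a unit-rate exponential variate and U a uniform variate on (0,1), with an arbitrary choice ρ=0.5):

import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.5, 10**6

# direct simulation of XY for (X,Y) standard bivariate normal with correlation rho
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
direct = x * y

# representation XY = R{cos(pi U) + rho}, assuming R ~ Exp(1) and U ~ U(0,1)
r = rng.exponential(1.0, n)
u = rng.uniform(0.0, 1.0, n)
trick = r * (np.cos(np.pi * u) + rho)

# both samples should share mean rho and variance 1 + rho^2
print(direct.mean(), trick.mean())   # both close to 0.50
print(direct.var(), trick.var())     # both close to 1.25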

post-grading weekend

Posted in Kids, pictures, Statistics, University life on January 19, 2015 by xi'an

Now that my grading is over, I can reflect on the unexpected difficulties in the mathematical statistics exam. I knew that the first question in the multiple choice exercise, borrowed from Cross Validated, was going to be quasi-impossible and indeed only one student out of 118 managed to find the right solution. More surprisingly, most students did not manage to solve the (absence of) MLE when observing that n unobserved exponential Exp(λ) variates were larger than a fixed bound δ. I was also amazed that they did poorly on a N(0,σ²) setup, failing to see that

\mathbb{E}[\mathbb{I}(X_1\le -1)] = \Phi(-1/\sigma)

and to determine an unbiased estimator that could then be improved by Rao-Blackwellisation. No student reached the conditioning part. And a rather frequent, if more understandable, mistake given their limited exposure to Bayesian statistics: many confused the parameter λ with the observation x in the prior, writing

\pi(\lambda|x) \propto \lambda \exp\{-\lambda x\} \times x^{a-1} \exp\{-bx\}

instead of

\pi(\lambda|x) \propto \lambda \exp\{-\lambda x\} \times \lambda^{a-1} \exp\{-b\lambda\}

hence could not derive a proper posterior.
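As a side check on the first identity, here is a minimal Monte Carlo verification (in Python; the value σ=2 and the sample size are arbitrary choices of mine) that the indicator I(X₁ ≤ -1) is indeed an unbiased estimator of Φ(-1/σ):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
sigma, n = 2.0, 10**6

# naive unbiased estimator of P(X_1 <= -1): the indicator I(X_1 <= -1),
# averaged here over many replications of X_1 ~ N(0, sigma^2)
x1 = sigma * rng.standard_normal(n)
print(np.mean(x1 <= -1))      # close to 0.3085
print(norm.cdf(-1 / sigma))   # Phi(-1/2) = 0.3085...

The Rao-Blackwellisation step the students missed would then amount to conditioning this indicator on the sufficient statistic for σ², presumably ∑ᵢ Xᵢ² in this setting.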
