Archive for jatp
50/50 photography competition [another public image]
Posted in Statistics with tags competition, jatp, landscape photography, Old Man of Storr, Université Paris Dauphine, Vert le Futur on February 17, 2019 by xi'an
say cheese [jatp]
Posted in Statistics with tags brains, british cheeses, cheesemonger, Instagram, jatp, Jericho Cheese, Little Clarendon Street, Oxford, pointfivegully, Stichelton, Stilton on February 16, 2019 by xi'an
A picture taken at Jericho Cheese, Little Clarendon Street, Oxford, this week when I visited this cheesemonger for the first time, after several years of passing by its tantalizing display of British cheeses! It happens to have become my most popular picture on Instagram, ranking above the fiery sunrise over the Calanques and the alignment of brains at the Paris Brain Institute!
Fisher’s lost information
Posted in Books, Kids, pictures, Statistics, Travel with tags asymptotic variance, central limit theorem, Fisher information, Fréchet-Darmois-Cramér-Rao bound, Ganges, jatp, uniform distribution, Varanasi on February 11, 2019 by xi'an
After a post on X validated and a good discussion at work, I came to the conclusion [after many years of sweeping the puzzle under the carpet] that the (a?) Fisher information obtained for the Uniform U(0,θ) distribution, namely θ⁻², is meaningless. Indeed, there are many arguments against it (the naive computation behind this value, and a quick simulation check, are sketched after the list):
- The lack of differentiability of the indicator function at x=θ is a non-issue, since the derivative is defined almost everywhere.
- In many textbooks, the Fisher information θ⁻² is derived from the Fréchet-Darmois-Cramér-Rao inequality, which does not apply to the Uniform U(0,θ) distribution since its support depends on θ.
- One connected argument for expressing the Fisher information as the expectation of the squared score is that it is then the variance of the score, whose expectation is zero. Except that this expectation is not zero for the Uniform U(0,θ) distribution: the score equals -θ⁻¹ whatever the observation, so its expectation is -θ⁻¹ and its variance is zero.
- For the same reason, the expectation of the opposite of the second derivative of the log-likelihood is not equal to the expectation of the squared score: it is actually -θ⁻², rather than θ⁻²!
- Looking at the Taylor expansion justification of the (observed) Fisher information, expanding the log-likelihood around the maximum likelihood estimator does not work, since the maximum likelihood estimator does not cancel the score: the score equals -n/θ, which never vanishes, and the maximum likelihood estimator is the sample maximum x₍ₙ₎, a boundary point.
- When computing the Fisher information for an n-sample rather than a 1-sample, the resulting quantity is n²θ⁻² rather than nθ⁻², contradicting the additivity of the information across independent observations.
- Since the maximum likelihood estimator converges to θ at speed n⁻¹ (its variance being of order n⁻² rather than the usual n⁻¹), the central limit theorem does not apply and the limiting variance of the maximum likelihood estimator is not given by the Fisher information: n(θ - X₍ₙ₎) converges to an Exponential distribution with mean θ, as the simulation sketch below illustrates.
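For completeness, here is the naive computation leading to the θ⁻² value discussed above, written out in standard notation (a sketch, not part of the original post). For a single observation from U(0,θ),

$$\log f(x\mid\theta) = -\log\theta \quad (0<x<\theta), \qquad \frac{\partial}{\partial\theta}\log f(x\mid\theta) = -\frac{1}{\theta},$$

$$\mathbb{E}_\theta\!\left[\left(\frac{\partial}{\partial\theta}\log f(X\mid\theta)\right)^{2}\right] = \frac{1}{\theta^{2}},$$

while for an iid sample of size n the same computation returns

$$\mathbb{E}_\theta\!\left[\left(-\frac{n}{\theta}\right)^{2}\right] = \frac{n^{2}}{\theta^{2}} \neq \frac{n}{\theta^{2}},$$

and the actual behaviour of the maximum likelihood estimator is

$$n\,(\theta - X_{(n)}) \overset{d}{\longrightarrow} \mathcal{E}xp(1/\theta),$$

an Exponential limit with mean θ rather than a Gaussian limit at the √n scale.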
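And a quick Monte Carlo illustration of the last points (a minimal sketch in Python, not from the original post; the values of θ, n, and the number of replications are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 100, 100_000

# iid U(0, theta) samples, one row per replication
x = rng.uniform(0.0, theta, size=(reps, n))

# the score d/dtheta log L(theta) = -n/theta is constant in the data,
# so its expectation is -n/theta (not zero) and its square is n^2/theta^2
print("score         =", -n / theta)
print("score squared =", (n / theta) ** 2)

# the MLE is the sample maximum; n*(theta - MLE) has an Exponential limit
# with mean theta, so the estimation error is of order 1/n, not 1/sqrt(n)
mle = x.max(axis=1)
err = n * (theta - mle)
print("mean of n*(theta - MLE):", err.mean())   # close to theta
print("var  of n*(theta - MLE):", err.var())    # close to theta**2
```

The empirical mean and variance of n(θ - X₍ₙ₎) should settle near θ and θ², with a markedly skewed histogram, consistent with the Exponential limit rather than with √n Gaussian asymptotics driven by a Fisher information.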