Archive for correction
a rush to grade
Posted in Kids, pictures, Statistics, University life with tags biking, correction, COVID-19, cycle path, France, grading, inverse Gaussian distribution, lockdown, midterms, pandemic, Paris on October 29, 2020 by xi'an
exams
Posted in Kids, Statistics, University life with tags Basu's theorem, bootstrap, convergence, copies, correction, exam, mathematical statistics, Université Paris Dauphine on February 7, 2018 by xi'an
a mistake in a 1990 paper
Posted in Kids, Statistics, University life with tags correction, CRC Press, handbook of mixture analysis, improper priors, Jean Diebolt, JRSSB, mixtures of distributions, Royal Statistical Society, Series B on August 7, 2016 by xi'an
As we were working on the Handbook of mixture analysis with Sylvia Frühwirth-Schnatter and Gilles Celeux today, near Saint-Germain-des-Prés, I realised that there was a mistake in our 1990 mixture paper with Jean Diebolt [published in 1994]: when proposing to use improper “Jeffreys” priors under the restriction that no component of the Gaussian mixture is “empty”, meaning that at least two observations are generated from each component, the likelihood needs to be renormalised to be a density for the sample. This normalisation constant only depends on the weights of the mixture, which means that, when simulating from the full conditional distribution of the weights, there should be an extra acceptance step to account for this correction. Of course, the term is essentially equal to one for a large enough sample, but this remains a mistake nonetheless! It is funny that it went undetected for so long in my most cited paper. Checking Larry's 1999 paper exploring the idea of excluding terms from the likelihood to allow for improper priors, I did not spot a correction there either.
Random [uniform?] sudokus [corrected]
Posted in R, Statistics with tags combinatorics, correction, Monte Carlo, simulation, sudoku, uniformity on May 19, 2010 by xi'an
As the discrepancy [from 1] in the sum of the nine probabilities seemed too blatant to be attributed to numerical error given the problem scale, I went and checked my R code for the probabilities and found a choose(9,3) instead of a choose(6,3) in the last line… The fit between the true distribution and the observed frequencies is now much better,
but the chi-square test remains suspicious of the uniform assumption (or again of my programming abilities):
> chisq.test(obs,p=pdiag)
Chi-squared test for given probabilities
data: obs
X-squared = 16.378, df = 6, p-value = 0.01186
since a p-value of 1% is a bit in the far tail of the distribution.
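The coefficient mix-up above is easy to check numerically: choose(9,3) is more than four times choose(6,3), which is why the nine probabilities failed to sum to one. A minimal sketch in Python (rather than the post's R, and with made-up counts, not the actual sudoku data) of the binomial coefficients and of the Pearson statistic that chisq.test(obs, p=pdiag) computes:

```python
from math import comb

# The bug in the post: choose(9,3) was typed where choose(6,3) was intended,
# inflating one of the nine probabilities past the sum-to-one constraint.
print(comb(9, 3))  # 84, the erroneous coefficient
print(comb(6, 3))  # 20, the corrected one

# Pearson's X^2 statistic, the quantity behind R's chisq.test(obs, p=pdiag),
# illustrated on hypothetical counts under a uniform null:
def pearson_x2(obs, p):
    n = sum(obs)
    return sum((o - n * q) ** 2 / (n * q) for o, q in zip(obs, p))

obs = [90, 110, 95, 105, 100, 100]  # made-up observed counts
p = [1 / 6] * 6                     # hypothetical uniform null probabilities
print(round(pearson_x2(obs, p), 2))  # 2.5
```

Large values of the statistic relative to the chi-square distribution with (number of cells − 1) degrees of freedom are what produce small p-values like the 0.01186 reported above.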