Archive for cross validated
conditional confusion [X validated]
Posted in Statistics with tags cross validated, basic probability on January 23, 2023 by xi'an

Those who live by ChatGPT are destined to get advice of unpredictable quality
Posted in Statistics with tags ANOVA, ChatGPT, cross validated, predictor, R, regression on January 18, 2023 by xi'an

continuously tough [screenshots]
Posted in Books, Kids, Statistics, University life with tags basic probability, continuity, continuous random variable, cross validated, density, probability on December 27, 2022 by xi'an
As I was looking for a pointer on continuous random variables for an X validated question, I came across these two "definitions"…
a [counter]example of minimaxity
Posted in Books, Kids, Statistics, University life with tags 1984, cross validated, Geometric distribution, George Orwell, least favourable priors, minimaxity, negative binomial distribution, Statistical Decision Theory and Bayesian Analysis, The Bayesian Choice on December 14, 2022 by xi'an

A chance question on X validated made me reconsider minimaxity over the weekend. Consider a Geometric G(p) variate X. What is the minimax estimator of p under squared error loss? I thought it could be obtained via (Beta) conjugate priors, but, following Dyubin (1978), the minimax estimator corresponds to a prior with point masses at ¼ and 1, resulting in an estimator equal to ¾ everywhere, except when X=0, where it is equal to 1. The actual question used a penalised quadratic loss, dividing the squared error by p(1-p), which strongly penalises errors at p=0 and p=1, and hence suggested an estimator equal to 1 when X=0 and to 0 otherwise. This proves to be the (unique) minimax estimator, with constant risk equal to 1. It reminded me of the fantastic 1984 paper by George Casella and Bill Strawderman on the estimation of a bounded normal mean, where the least favourable prior is supported by two atoms if the bound is small enough. Figure 1 in the negative binomial extension by Morozov and Syrova (2022) exploits the same principle. (Nothing Orwellian there!) If nothing else, a nice illustration for my Bayesian decision theory course!
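The constant risk claim is easy to check numerically; the following R lines are a minimal sketch (the function name pen.risk, the grid of p values, and the support truncation are mine), computing the penalised quadratic risk of the estimator equal to 1 when X=0 and to 0 otherwise, with the Geometric support starting at zero as in dgeom:

# penalised quadratic risk of delta(x) = I(x == 0) for X ~ Geom(p),
# with P(X = x) = p (1 - p)^x, x = 0, 1, 2, ...
pen.risk <- function(p, xmax = 1e4) {
  x <- 0:xmax                    # truncated support, negligible tail mass
  delta <- as.numeric(x == 0)    # candidate minimax estimator
  sum(dgeom(x, p) * (delta - p)^2) / (p * (1 - p))
}
round(sapply(seq(.05, .95, by = .05), pen.risk), 6)

Every entry of the output equals 1, in agreement with the constant risk of the minimax estimator.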
observed vs. complete in EM algorithm
Posted in Statistics with tags cross validated, EM algorithm, expectation maximisation, latent variable models, missing values, numerical maximisation on November 17, 2022 by xi'an

While answering a question related to the EM algorithm on X validated, I realised a global (or generic) feature of the (objective) E function, namely that

$$E(\theta'\mid\theta)=\mathbb{E}\big[\log L^c(\theta'\mid x,Z)\,\big|\,x,\theta\big]$$

can always be written as

$$E(\theta'\mid\theta)=\log L(\theta'\mid x)+\mathbb{E}\big[\log k(Z\mid x,\theta')\,\big|\,x,\theta\big]$$

where L^c is the complete-data likelihood and k(z|x,θ) the conditional density of the latent variable Z given the observation x; it therefore always includes the (log-) observed likelihood, at least in this formal representation. While the proof that EM is monotone in the values of the observed likelihood uses this decomposition as well, in that

$$\log L(\theta^{(t+1)}\mid x)-\log L(\theta^{(t)}\mid x)=E(\theta^{(t+1)}\mid\theta^{(t)})-E(\theta^{(t)}\mid\theta^{(t)})+\mathrm{KL}\big(k(\cdot\mid x,\theta^{(t)})\,\big\|\,k(\cdot\mid x,\theta^{(t+1)})\big)\ge 0,$$

both terms on the right-hand side being non-negative, I wonder if the appearance of the actual target in the temporary target E(θ'|θ) can be exploited any further.
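This monotonicity is easy to watch on a toy example; here is a minimal R sketch (all choices mine: a two-component Normal mixture with known equal weights and unit variances, only the means being updated by EM), recording the observed log-likelihood along the iterations:

set.seed(101)
x <- c(rnorm(150, -2), rnorm(150, 3))  # simulated two-component sample
loglik <- function(mu)                 # observed log-likelihood
  sum(log(.5 * dnorm(x, mu[1]) + .5 * dnorm(x, mu[2])))
mu <- c(-1, 1)                         # crude starting means
ll <- loglik(mu)
for (t in 1:50) {
  # E-step: posterior allocation probabilities of component one
  w <- .5 * dnorm(x, mu[1]) /
    (.5 * dnorm(x, mu[1]) + .5 * dnorm(x, mu[2]))
  # M-step: weighted means maximise E(θ'|θ) in θ'
  mu <- c(sum(w * x) / sum(w), sum((1 - w) * x) / sum(1 - w))
  ll <- c(ll, loglik(mu))
}
all(diff(ll) >= -1e-10)  # TRUE: the observed log-likelihood never decreases

The decomposition above explains why: each M-step increases E(θ'|θ), while the Kullback-Leibler term is non-negative by construction.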