Archive for sufficiency

conditioning on zero probability events

Posted in Books, Kids, pictures, Statistics, University life with tags conditional probability, cross validated, StackExchange, sufficiency, zero measure set on November 15, 2019 by xi'an

An interesting question on X validated: how can a statistic T(X) be sufficient when its support depends on the parameter θ behind the distribution of X? The reasoning in the question was that the distribution of X given T(X)=t does depend on θ, since it is not even defined for some values of θ… This is not correct, in that the conditional distribution of X depends on the realisation of T: if this realisation is impossible under a given θ, the conditional is arbitrary there and of no relevance. It also led me to tangentially notice and bemoan that most (Stack) exchanges on conditioning on zero probability events are pretty unsatisfactory, in that they insist on interpreting P(X=x) [equal to zero] in a literal sense, when it is merely a notation in the continuous case, and undefined when X has a discrete support. (Conditional probability is always a sore point for my students!)
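For a concrete illustration, consider the textbook uniform case (my choice of example, not from the thread): for X_1,…,X_n iid U(0,θ), the joint density factorises as

\[
  f_\theta(x_1,\dots,x_n)
  = \theta^{-n}\,\mathbb{I}\{x_{(n)}\le\theta\}\,\mathbb{I}\{x_{(1)}\ge 0\}
  = g_\theta\bigl(x_{(n)}\bigr)\,h(x_1,\dots,x_n),
\]

so T=X_(n), the sample maximum, is sufficient by the factorisation theorem even though its support (0,θ] depends on θ. The conditional law of the sample given T=t is free of θ whenever t≤θ, and can be set arbitrarily (hence is irrelevant) when t>θ.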
Binomial vs Bernoulli

Posted in Books, Statistics with tags Bayesian model choice, Bayesian statistics, conditioning, cross validated, sufficiency on December 25, 2018 by xi'an

An interesting confusion on X validated, where someone was convinced that using the Bernoulli representation of a sequence of Bernoulli experiments led to different posterior probabilities for two possible models than using their Binomial representation. The confusion actually stemmed from using different conditionals, namely N¹=4, N²=1 in the first case (for a model M¹ with two probabilities p¹ and p²) and N¹+N²=5 in the second case (for a model M² with a single probability p⁰). While (N¹,N²) is sufficient for the first model and N¹+N² is sufficient for the second model, P(M¹|N¹,N²) is not commensurable with P(M²|N¹+N²)! Another illustration of the fickleness of the notion of sufficiency when comparing models.
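To see why the two representations must agree once both models are conditioned on the same event, write n_1 and n_2 for the two sample sizes (my notation, with subscripts) and note that the Binomial likelihood differs from the Bernoulli one only by combinatorial constants common to both models:

\[
  P(N_1=4,N_2=1\mid M_1)
  = \binom{n_1}{4}\binom{n_2}{1}\int p_1^4(1-p_1)^{n_1-4}\,p_2(1-p_2)^{n_2-1}\,\pi_1(p_1,p_2)\,\mathrm{d}p_1\,\mathrm{d}p_2,
\]
\[
  P(N_1=4,N_2=1\mid M_2)
  = \binom{n_1}{4}\binom{n_2}{1}\int p^5(1-p)^{n_1+n_2-5}\,\pi_2(p)\,\mathrm{d}p,
\]

and the binomial coefficients cancel from the Bayes factor, exactly as with the raw Bernoulli sequences. Comparing P(M¹|N¹,N²) with P(M²|N¹+N²) instead conditions the two models on different events, which is why the resulting posterior probabilities are not commensurable.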
absurdly unbiased estimators

Posted in Books, Kids, Statistics with tags best unbiased estimator, completeness, conditioning, Erich Lehmann, sufficiency, The American Statistician, UMVUE, unbiased estimation on November 8, 2018 by xi'an

“…there are important classes of problems for which the mathematics forces the existence of such estimators.”
Recently I came across a short paper written by Erich Lehmann for The American Statistician, Estimation with Inadequate Information. He analyses the apparent absurdity of using unbiased estimators, or even best unbiased estimators, in settings like a Poisson P(λ) observation X producing the (unique) unbiased estimator of exp(-bλ), equal to

\[
  \delta(X) = (1-b)^X.
\]
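Unbiasedness is a one-line check via the Poisson generating function, and uniqueness follows from matching the power series in λ coefficient by coefficient:

\[
  \mathbb{E}_\lambda\bigl[(1-b)^X\bigr]
  = \sum_{x=0}^\infty (1-b)^x\,\frac{e^{-\lambda}\lambda^x}{x!}
  = e^{-\lambda}\,e^{(1-b)\lambda}
  = e^{-b\lambda}.
\]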
The estimator is indeed absurd when b>1, as it takes negative values at odd realisations of X. My first reaction to this example is that the question of what is “best” for a single observation is not very meaningful, and that adding n independent Poisson observations replaces b with b/n, which eventually falls below one. But Lehmann argues that the paradox stems from a case of missing information, as for instance in the Poisson example, where the above quantity is the probability P(T=0) for T=X+Y, Y being another, unobserved, Poisson variate with parameter (b-1)λ. In many such cases there is no unbiased estimator at all; when one exists, it must take values outside the (0,1) range, thanks to a lemma shown by Lehmann that the conditional expectation of this estimator given T is either zero or one.
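A minimal simulation of this absurd behaviour (with illustrative values λ=1 and b=3, my choice):

```python
import numpy as np

# X ~ Poisson(lam): delta(X) = (1-b)**X is the unique unbiased
# estimator of exp(-b*lam), yet for b>1 it takes negative values.
rng = np.random.default_rng(42)
lam, b = 1.0, 3.0
x = rng.poisson(lam, size=10**6)
delta = (1.0 - b) ** x

print(f"mean of delta      : {delta.mean():.4f}")        # about exp(-3) = 0.0498
print(f"target exp(-b*lam) : {np.exp(-b * lam):.4f}")
print(f"P(delta < 0)       : {(delta < 0).mean():.4f}")  # about 0.43!
```

The average is on target, but close to half of the realised estimates of a probability are negative.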
I find the short paper quite interesting in exposing some reasons why these estimators cannot extract enough information from the data (often a single observation) to efficiently estimate the targeted function of the parameter, even though the setting may appear rather artificial.