Archive for conditioning

order, order!

Posted in Books, pictures, Statistics, University life on June 9, 2020 by xi'an

A very standard (one-line) question on X validated, namely whether min(X,Y) can have a finite mean when both X and Y have infinite means [the answer is yes, possibly!], brought a lot of traffic, including an incorrect answer, and made it to the “Hot Network Questions” list, for no clear reason. Besides my half-Cauchy example, some answers pointed out the connection between the mean and the cdf, the mean being the integral of the complementary cdf over the positive half-line minus the integral of the cdf over the negative half-line, and between the mean and the quantile function, as

\mathbb E[T(X)]=\int_0^1 T(Q_X(u))\text{d}u

since it nicely expands to

\mathbb E[T(X_{(k)})]=\int_0^1 \frac{u^{k-1}(1-u)^{n-k}}{B(k,n-k+1)}T(Q_X(u))\text{d}u
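
The expansion follows from the standard quantile-transform representation of order statistics (a step left implicit in the post): for an i.i.d. sample of size n,

X_{(k)} \overset{d}{=} Q_X(U_{(k)}),\qquad U_{(k)}\sim\text{Beta}(k,n-k+1)

since the Beta(k,n-k+1) density of the k-th uniform order statistic is exactly the ratio appearing in the integrand.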

but I remain bemused by the excitement..! (Including the many answers and the lack of involvement of the OP.)
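
For readers who want to see the finite mean emerge, here is a minimal Monte Carlo sketch in the spirit of the half-Cauchy example above; the code, the seed, and the sample size are mine, not part of the original question.

```python
# X and Y half-Cauchy, hence both with infinite means, yet min(X, Y) has a
# finite mean since P(min > t) = P(X > t)^2 ~ 4 / (pi t)^2 is integrable
# (the exact value works out as 4 log(2) / pi, about 0.88).
import numpy as np

rng = np.random.default_rng(1)
n = 10**6
x = np.abs(rng.standard_cauchy(n))
y = np.abs(rng.standard_cauchy(n))

print(np.mean(np.minimum(x, y)))   # stabilises around 0.88 as n grows
print(np.mean(x))                  # keeps drifting: the half-Cauchy mean is infinite
```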

Bertrand-Borel debate

Posted in Books, Statistics on May 6, 2019 by xi'an

On her blog, Deborah Mayo briefly mentioned the Bertrand-Borel debate on the (in)feasibility of hypothesis testing, as reported [and translated] by Erich Lehmann. A first interesting feature is that both mathematicians [whose names start with B] discuss the probability of causes in the Bayesian spirit of Laplace. With Bertrand considering that the prior probabilities of the different causes are impossible to set, and then moving all the way to dismissing the use of probability theory in this setting, nipping the p-values in the bud..! And Borel being rather vague about the solution probability theory has to provide, as stressed by Lehmann.

“The Pleiades appear closer to each other than one would naturally expect. This statement deserves thinking about; but when one wants to translate the phenomenon into numbers, the necessary ingredients are lacking. In order to make the vague idea of closeness more precise, should we look for the smallest circle that contains the group? the largest of the angular distances? the sum of squares of all the distances? the area of the spherical polygon of which some of the stars are the vertices and which contains the others in its interior? Each of these quantities is smaller for the group of the Pleiades than seems plausible. Which of them should provide the measure of implausibility? If three of the stars form an equilateral triangle, do we have to add this circumstance, which is certainly very unlikely a priori, to those that point to a cause?” Joseph Bertrand (p.166)


“But whatever objection one can raise from a logical point of view cannot prevent the preceding question from arising in many situations: the theory of probability cannot refuse to examine it and to give an answer; the precision of the response will naturally be limited by the lack of precision in the question; but to refuse to answer under the pretext that the answer cannot be absolutely precise, is to place oneself on purely abstract grounds and to misunderstand the essential nature of the application of mathematics.” Emile Borel (Chapter 4)

Another highly interesting objection of Bertrand is somewhat linked with his conditioning paradox, namely that the density of the observed unlikely event depends on the choice of the statistic used to calibrate its unlikeliness. This makes complete sense, in that the information contained in each of these statistics, and the resulting probability or likelihood, differ to an arbitrary extent; there are few cases (monotone likelihood ratio) where the choice can be made; and Bayes factors share the same drawback if they do not condition upon the entire sample, in which case there is no selection of “circonstances remarquables” [remarkable circumstances], or of uniformly most powerful tests.
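
To make Bertrand's point concrete, here is a small Monte Carlo sketch of my own (not from the debate nor from Lehmann): the same suspiciously tight configuration of points receives a different p-value depending on which statistic is chosen to measure closeness, echoing the list of candidate measures in the Pleiades quote. The uniform-on-a-square null, the fake cluster, and the three statistics are all illustrative assumptions.

```python
# Same data, different measures of "closeness", different p-values.
import numpy as np

rng = np.random.default_rng(0)

def diameter(pts):
    # largest pairwise distance in the point cloud
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return d.max()

def mean_pairwise(pts):
    # average distance over all ordered pairs of distinct points
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    n = len(pts)
    return d.sum() / (n * (n - 1))

def sum_sq_to_centroid(pts):
    return ((pts - pts.mean(axis=0)) ** 2).sum()

stats = {"diameter": diameter,
         "mean pairwise distance": mean_pairwise,
         "sum of squares": sum_sq_to_centroid}

# a hypothetical "observed" cluster of 7 points (a stand-in for the Pleiades)
obs = 0.5 + 0.12 * rng.standard_normal((7, 2))

# null model: 7 points uniformly distributed on the unit square
sims = rng.uniform(size=(10_000, 7, 2))

for name, T in stats.items():
    t_obs = T(obs)
    t_sim = np.array([T(s) for s in sims])
    pval = (t_sim <= t_obs).mean()   # small statistic = points close together
    print(f"{name:25s} p-value = {pval:.4f}")
```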

Binomial vs Bernoulli

Posted in Books, Statistics on December 25, 2018 by xi'an

An interesting confusion on X validated, where someone was convinced that the Bernoulli representation of a sequence of Bernoulli experiments led to different posterior probabilities of two possible models than their Binomial representation. The confusion actually stemmed from conditioning on different statistics, namely N¹=4, N²=1 in the first case (for a model M¹ with two probabilities p¹ and p²) and N¹+N²=5 in the second case (for a model M² with a single probability p⁰). While (N¹,N²) is sufficient for the first model and N¹+N² is sufficient for the second model, P(M¹|N¹,N²) is not commensurable with P(M²|N¹+N²)! Another illustration of the fickleness of the notion of sufficiency when comparing models.
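
As a minimal sketch of the resolution, one can evaluate both models on the same conditioning event, namely the full pair (N¹,N²). The batch sizes (ten trials in each group) and the uniform Beta(1,1) priors below are hypothetical choices of mine, not taken from the question.

```python
# Compare M1 (two probabilities) and M2 (one probability) on the *same* data,
# i.e. the exact Bernoulli sequence summarised by (N1, N2), rather than
# contrasting P(M1 | N1, N2) with P(M2 | N1 + N2).
from math import lgamma, exp

def betaln(a, b):
    # log of the Beta function B(a, b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

n1, n2 = 10, 10        # hypothetical numbers of trials in each batch
x1, x2 = 4, 1          # observed successes, N1 = 4 and N2 = 1

# M1: p1, p2 ~ U(0,1) independently; the marginal likelihood of the exact
# sequence factorises over the two batches
log_m1 = betaln(x1 + 1, n1 - x1 + 1) + betaln(x2 + 1, n2 - x2 + 1)

# M2: a single p ~ U(0,1) applied to the pooled sequence of n1 + n2 trials
log_m2 = betaln(x1 + x2 + 1, n1 + n2 - x1 - x2 + 1)

print(f"log m1 = {log_m1:.3f}  log m2 = {log_m2:.3f}  B12 = {exp(log_m1 - log_m2):.3f}")
```

Computed this way, both marginal likelihoods refer to the probability of the same exact sequence of Bernoulli outcomes, so the resulting Bayes factor is well defined.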

absurdly unbiased estimators

Posted in Books, Kids, Statistics on November 8, 2018 by xi'an

“…there are important classes of problems for which the mathematics forces the existence of such estimators.”

Recently I came across a short paper written by Erich Lehmann for The American Statistician, Estimation with Inadequate Information. He analyses the apparent absurdity of using unbiased estimators, or even best unbiased estimators, in settings such as a single Poisson P(λ) observation X, for which the (unique) unbiased estimator of exp(-bλ) is

(1-b)^x

which is indeed absurd when b>1. My first reaction to this example is that the question of what is “best” for a single observation is not very meaningful, and that adding n independent Poisson observations replaces b with b/n, which eventually falls below one. But Lehmann argues that the paradox stems from a case of missing information, as for instance in the Poisson example, where the above quantity is the probability P(T=0), with T=X+Y and Y an unobserved Poisson variable with parameter (b-1)λ. In many such cases, there is no unbiased estimator at all. When there is one, it must take values outside the (0,1) range, owing to a lemma shown by Lehmann that the conditional expectation of this estimator given T is either zero or one.
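
A quick numerical sanity check of my own (not from Lehmann's paper) of why (1-b)^X is unbiased: for X~P(λ), the generating function E[s^X]=exp(λ(s-1)) equals exp(-bλ) at s=1-b, while the estimator itself oscillates absurdly once b>1.

```python
# Unbiasedness of (1-b)^X for exp(-b*lam) when X ~ Poisson(lam), checked by
# truncating the series E[(1-b)^X] = sum_x (1-b)^x P(X = x).
import numpy as np
from scipy.stats import poisson

lam, b = 2.0, 3.0
xs = np.arange(0, 100)                       # truncation is harmless for lam = 2
estimate = np.sum((1 - b) ** xs * poisson.pmf(xs, lam))
print(estimate, np.exp(-b * lam))            # both close to exp(-6), about 0.00248

# ...yet the "estimator" itself is absurd for b = 3:
print([(1 - b) ** x for x in range(6)])      # 1, -2, 4, -8, 16, -32
```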

I find the short paper quite interesting in exposing some reasons why such estimators cannot extract enough information from the data (often a single observation) to estimate the targeted function of the parameter efficiently, even though the setting may appear rather artificial.

Bayesians conditioning on sets of measure zero

Posted in Books, Kids, pictures, Statistics, University life on September 25, 2018 by xi'an

Although I have already discussed this point repeatedly on this ‘Og, I found myself replying to [yet] another question on X validated about the apparent paradox of conditioning on a set of measure zero, as for instance when computing

P(X=.5 | |X|=.5)

which actually has nothing to do with Bayesian inference or Bayes’ Theorem, but simply questions the definition of conditional probability distributions. The OP was correct in stating that

P(X=x | |X|=x)

was defined up to a set of measure zero. And even that

P(X=.5 | |X|=.5)

could be defined arbitrarily, prior to the observation of |X|. But once |X| is observed, say to take the value 0.5, there is a zero probability that this value belongs to the set of measure zero where one defined

P(X=x | |X|=x)

arbitrarily. A point that always proves delicate to explain in class…!
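
For readers wondering what the non-arbitrary answer looks like, here is a hedged sketch of mine (not part of the original exchange): when X has a density f, disintegrating the joint law of (X,|X|) yields f(x)/(f(x)+f(-x)) as a natural version of P(X=x | |X|=x). The Normal N(1,1) choice below is purely illustrative.

```python
# One natural version of P(X = x | |X| = x) when X has a density f: the
# conditional law of X given |X| = x is supported on {x, -x} with weights
# proportional to f(x) and f(-x). Any redefinition on a null set of x values
# leaves the joint distribution of (X, |X|) unchanged -- the point of the post.
from scipy.stats import norm

def prob_positive_given_abs(x, mu=1.0, sigma=1.0):
    fx, fmx = norm.pdf(x, mu, sigma), norm.pdf(-x, mu, sigma)
    return fx / (fx + fmx)

print(prob_positive_given_abs(0.5))   # about 0.731 for the N(1,1) illustration
```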