Archive for conditional probability

In Bayesian statistics, data is considered nonrandom…

Posted in Books, Statistics, University life on July 12, 2021 by xi'an

A rather weird question popped up on X validated, namely why Bayesian analysis relies on a sampling distribution if the data is nonrandom. While a given sample is indeed a deterministic object, and hence nonrandom from this perspective, I replied that, on the opposite, Bayesian analysis sets the observed data as the realisation of a random variable, in order to condition upon this realisation and construct a posterior distribution on the parameter. Which is quite different from calling it nonrandom! But, presumably putting too much meaning into and spending too much time on this query, I remain somewhat bemused by the line of thought that led to this question…
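In symbols (standard Bayes notation, not spelled out in the post): once the sample x is observed, the posterior obtains precisely by conditioning on the realisation X = x,

```latex
\pi(\theta \mid x) = \frac{f(x \mid \theta)\,\pi(\theta)}{\int f(x \mid \theta')\,\pi(\theta')\,\mathrm{d}\theta'}
```

where x is fixed once collected, yet enters the analysis as the realisation of a random variable with sampling density f(·|θ).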

baby please don’t cry

Posted in Statistics on April 9, 2021 by xi'an

First, an express riddle from the Riddler of last week:

An infant naps peacefully for two hours at a time and then wakes up, crying, due to hunger. After eating quickly, the infant plays alone for another hour, and then cries due to tiredness. This cycle repeats over the course of a 12-hour day. (The baby sleeps peacefully 12 hours through the night.) At a random time during the day, you spend 30 minutes with your baby and then the baby cries. What’s the probability that your baby is hungry?

The probabilistic setting is somewhat unclear, in particular because the last daytime nap is immediately followed by the 12-hour night sleep (or the 12-hour night sleep is immediately followed by a one- or two-hour nap). Assuming a uniformly random starting time over the 12-hour period, and denoting by X the time to the next cry and by Y the nature of the cry (H versus T), it is straightforward to show that P(Y=H|X=30′) is ½, while it would be 1 for any duration larger than one hour.

Followed by an extra one this week:

Starting at a random time, 30 minutes go by with no cries. What is the probability that the next time your baby cries she will be hungry?

Which means computing P(Y=H|X>30′). Equal to ¾ in this case.
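Both answers can be checked by a quick Monte Carlo sketch (my own construction, not from the Riddler): simulate a uniform starting time over the 12-hour day, built from four 3-hour cycles of a 2-hour nap ending in a hunger cry followed by a 1-hour play ending in a tired cry.

```python
import random

def cry_schedule(t):
    """For a start time t (hours into the 12-hour day), return
    (time until next cry, cry type). Each 3-hour cycle is a 2h nap
    ending in a hunger cry (H), then 1h of play ending in a tired cry (T)."""
    s = t % 3.0
    if s < 2.0:
        return 2.0 - s, "H"
    return 3.0 - s, "T"

random.seed(42)
n = 1_000_000
h_at_30 = n_at_30 = h_gt_30 = n_gt_30 = 0
for _ in range(n):
    t = random.uniform(0.0, 12.0)
    x, y = cry_schedule(t)
    if abs(x - 0.5) < 0.01:      # X ~ 30 minutes (small window)
        n_at_30 += 1
        h_at_30 += (y == "H")
    if x > 0.5:                  # X > 30 minutes
        n_gt_30 += 1
        h_gt_30 += (y == "H")

print(h_at_30 / n_at_30)   # close to 1/2
print(h_gt_30 / n_gt_30)   # close to 3/4
```

The first frequency approximates P(Y=H|X=30′)=½ through a small window around 30 minutes, the second gives P(Y=H|X>30′)=¾ directly, since per cycle 1.5 of the 2 hours compatible with X>30′ lead to a hunger cry.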

conditioning on zero probability events

Posted in Books, Kids, pictures, Statistics, University life on November 15, 2019 by xi'an

An interesting question on X validated as to how a statistic T(X) can be sufficient when its support depends on the parameter θ behind the distribution of X. The reasoning there was that the distribution of X given T(X)=t does depend on θ, since it is not defined for some values of θ… Which is not correct, in that the conditional distribution of X depends on the realisation of T: if this realisation is impossible under a given θ, then the conditional is arbitrary and of no relevance. Which also led me to tangentially notice and bemoan that most (Stack) exchanges on conditioning on zero probability events are pretty unsatisfactory, in that they insist on interpreting P(X=x) [equal to zero] in a literal sense, when it is merely a notation in the continuous case, and undefined when X has a discrete support. (Conditional probability is always a sore point for my students!)
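As a hypothetical illustration (mine, not from the question): take X₁,…,Xₙ iid U(0,θ) and T(X) = maxᵢ Xᵢ, whose support (0,θ) depends on θ. The factorisation

```latex
f_\theta(x_1,\dots,x_n) = \theta^{-n}\,\mathbb{I}\{t \le \theta\},
\qquad t = \max_i x_i,
```

shows T is sufficient, while, for an observed t ≤ θ, the conditional distribution of the sample given T=t is free of θ; values of θ below t are simply impossible given that realisation, so the arbitrary definition of the conditional there is of no relevance.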

Bayesians conditioning on sets of measure zero

Posted in Books, Kids, pictures, Statistics, University life on September 25, 2018 by xi'an

Although I have already discussed this point repeatedly on this ‘Og, I found myself replying to [yet] another question on X validated about the apparent paradox of conditioning on a set of measure zero, as for instance when computing

P(X=.5 | |X|=.5)

which actually has nothing to do with Bayesian inference or Bayes' Theorem, but simply pertains to the definition of conditional probability distributions. The OP was correct in stating that

P(X=x | |X|=x)

was defined up to a set of measure zero. And even that

P(X=.5 | |X|=.5)

could be defined arbitrarily, prior to the observation of |X|. But once |X| is observed, say as taking the value 0.5, there is zero probability that this value belongs to the set of measure zero where one defined

P(X=x | |X|=x)

arbitrarily. A point that always proves delicate to explain in class…!
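A simulation sketch may help in class (my own addition, assuming X ~ N(0,1)): the measure-zero conditioning P(X=.5 | |X|=.5) can be approximated as the limit of conditioning on the positive-probability event |X| ∈ (.5−ε, .5+ε).

```python
import random

random.seed(1)
eps = 0.01          # half-width of the window around .5
pos = tot = 0
for _ in range(1_000_000):
    x = random.gauss(0.0, 1.0)
    if abs(abs(x) - 0.5) < eps:   # condition on |X| near .5
        tot += 1
        pos += (x > 0)            # event {X = +.5} in the limit
print(pos / tot)   # close to 1/2, by symmetry of the N(0,1) density
```

The frequency converges to φ(.5)/(φ(.5)+φ(−.5)) = ½ as ε shrinks, which is the natural (continuous) version of the conditional, even though any version may be modified on a null set.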

all those ε’s…

Posted in Kids, pictures, Statistics, University life on October 25, 2017 by xi'an

A revealing [and interesting] question on X validated about ε's… The question was about the apparent contradiction in writing Normal random variates as the sum of their mean and of a random noise ε, in the context of the bivariate Normal variate (x,y), since the marginal-times-conditional decomposition led to two different sets of ε's. Which did not seem to agree. I replied that these ε's have to live in different σ-algebras, but this reminded me of some paradoxes found in fiducial analysis through this incautious manipulation of ε's…
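For a concrete sketch (notation mine, assuming a standard bivariate Normal with correlation ρ, not spelled out in the post), the two conditional decompositions read

```latex
y = \rho\,x + \sqrt{1-\rho^2}\,\varepsilon_1, \qquad
x = \rho\,y + \sqrt{1-\rho^2}\,\varepsilon_2,
```

with ε₁ = (y − ρx)/√(1−ρ²) independent of x, and ε₂ = (x − ρy)/√(1−ρ²) independent of y. Both are standard Normal, but they are distinct random variables, measurable with respect to different σ-algebras, hence no contradiction in the two decompositions failing to share a common ε.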