Archive for cross validated

In Bayesian statistics, data is considered nonrandom…

Posted in Books, Statistics, University life on July 12, 2021 by xi'an

A rather weird question popped up on X validated, namely why does Bayesian analysis rely on a sampling distribution if the data is nonrandom. While a given sample is indeed a deterministic object, and hence nonrandom from this perspective, I replied that, on the contrary, Bayesian analysis sets the observed data as the realisation of a random variable, in order to condition upon this realisation and construct a posterior distribution on the parameter. Which is quite different from calling the data nonrandom! But, presumably reading too much into and spending too much time on this query, I remain somewhat bemused by the line of thought that led to this question…
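To make the distinction concrete, here is a minimal sketch in R (the Binomial experiment, Uniform prior, and numbers are mine, purely for illustration): the observed count y is fixed, yet the posterior obtains by conditioning the joint model on this realisation.

## hypothetical illustration: y ~ Binomial(n, p) observed as y = 7, Uniform prior on p
## y is a fixed realisation, yet the posterior stems from conditioning the model on it
n <- 10; y <- 7
curve(dbeta(x, y + 1, n - y + 1), from = 0, to = 1,
      xlab = "p", ylab = "posterior density")   # p | y ~ Beta(y+1, n-y+1)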

conditioning an algorithm

Posted in Statistics on June 25, 2021 by xi'an

A question of interest on X validated: given a (possibly black-box) algorithm simulating from a joint distribution with density [wrt a continuous measure] p(z,y), (how) is it possible to simulate from the conditional p(y|z⁰)? Which reminded me of a recent paper by Lindqvist et al. on conditional Monte Carlo. Which zooms in on the simulation of a sample X given the value of a sufficient statistic, T(X)=t, revolving around pivotal quantities and inversions à la fiducial statistics, following an earlier Biometrika paper by Lindqvist & Taraldsen, in 2005. The idea is to write

X=\chi(U,\theta)\qquad T(X)=\tau(U,\theta)

where U has a distribution that depends on θ, to solve τ(u,θ)=t in θ for a given pair (u,t), with solution θ(u,t), and to generate u conditional on this solution. But this requires getting “under the hood” of the algorithm to such an extent that it no longer answers the original question, short of other solutions exploiting the expression for the joint density p(z,y)… In a purely black-box situation, ABC appears as the natural, if approximate, solution.
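As a minimal sketch of this last point, assuming only a black-box sampler joint_sim() returning a pair (z,y) (both the function name and the Normal toy joint below are my own stand-ins), an ABC rejection step keeps the y's whose companion z falls within ε of the conditioning value z⁰:

## ABC-style conditioning on z0, given nothing but a black-box joint sampler
abc_condition <- function(joint_sim, z0, eps, N = 1e5) {
  sims <- t(replicate(N, joint_sim()))   # N draws of (z, y) from the joint
  sims[abs(sims[, 1] - z0) < eps, 2]     # approximate draws from p(y|z0)
}
## toy check: z ~ N(0,1), y|z ~ N(z,1), so that p(y|z0) is N(z0,1)
joint_sim <- function() { z <- rnorm(1); c(z, rnorm(1, z)) }
ys <- abc_condition(joint_sim, z0 = 1, eps = .05)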

scale matters [maths as well]

Posted in pictures, R, Statistics on June 2, 2021 by xi'an

A question from X validated on why an independent Metropolis sampler of a three-component Normal mixture based on a single Normal proposal was failing to recover the said mixture…

When looking at the OP’s R code, I did not notice anything amiss at first glance (I was about to drive back from Annecy, hence did not look too closely) and reran the attached code with a larger variance in the proposal, which returned the above picture for the MCMC sample, close enough (?) to the target. Later, from home, I checked the code further and noticed that the Metropolis ratio was only using the ratio of the targets. Dividing by the ratio of the proposals made a significant (?) difference to the representation of the target.
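For the record, here is a minimal sketch of a corrected independent Metropolis sampler, with a three-component mixture and a N(0,10²) proposal standing in for the OP's (unreproduced) values: since the proposal density is not symmetric in (x,y), the acceptance ratio must involve both targets and proposals.

## independent Metropolis with the full ratio, targets times proposals
target <- function(x) sum(c(.3, .4, .3) * dnorm(x, c(-5, 0, 6), 1))
prop_sd <- 10                                   # large enough to reach all components
x <- 0; out <- numeric(1e4)
for (t in seq_along(out)) {
  y <- rnorm(1, 0, prop_sd)                     # independent Normal proposal
  ratio <- target(y) * dnorm(x, 0, prop_sd) /   # π(y) q(x)
           (target(x) * dnorm(y, 0, prop_sd))   # π(x) q(y)
  if (runif(1) < ratio) x <- y
  out[t] <- x
}
hist(out, breaks = 100, prob = TRUE)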

More interestingly, the OP was fundamentally confused between independent and random-walk Rosenbluth algorithms, from using the wrong ratio to aiming at the wrong scale factor and average acceptance rate, and was furthermore challenged by the very notion of a Hessian matrix, which is often suggested as a default scale.

unbalanced sampling

Posted in pictures, R, Statistics on May 17, 2021 by xi'an

A question from X validated on sampling from an unknown density f when given both a sample from the density f restricted to a (known) interval A, f¹ say, and a sample from f restricted to the complement of A, f² say. Or at least on producing an estimate of the mass of A under f, p(A).

The problem sounds impossible to solve without an ability to compute the density value at a given point, since any convex combination αf¹+(1-α)f² would return the same two samples. Assuming continuity of the density f at the boundary point a between A and its complement, a desperate solution for the odds p(A)/(1-p(A)) is to take the ratio of the two kernel density estimates at the value a, since f¹(a)=f(a)/p(A) and f²(a)=f(a)/(1-p(A)) imply f²(a)/f¹(a)=p(A)/(1-p(A)). This turns out to be not so poor an approximation, if seemingly biased, which surprised me as kernel density estimates are notoriously bad at boundary points.
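Here is a minimal sketch of this desperate solution, with f standard Normal and A=(-∞,0) as arbitrary stand-ins, so that the true odds are exactly one:

## ratio of kernel density estimates at the boundary a, estimating p(A)/(1-p(A))
a  <- 0
x  <- rnorm(1e6)                        # stand-in for the unknown f
s1 <- x[x < a]; s2 <- x[x >= a]         # the two restricted samples
k1 <- density(s1); k2 <- density(s2)
f1a <- approx(k1$x, k1$y, xout = a)$y   # estimate of f¹(a)
f2a <- approx(k2$x, k2$y, xout = a)$y   # estimate of f²(a)
f2a / f1a  # both estimates share a similar boundary bias, mostly cancelling in the ratio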

If f(x) can be computed [up to a constant] at an arbitrary x, it is obviously feasible to simulate from f and approximate p(A). But the problem is then moot, as a resolution would not even need the initial samples. When exploiting those samples to construct a single kernel density estimate, this estimate can be used as a proposal in an MCMC algorithm. Surprisingly (?), using the empirical cdf as proposal instead does not work.
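Continuing the above sketch, and pretending f can indeed be evaluated (here a standard Normal, so p(A)=½ is known for checking), a Metropolis sampler with the pooled kernel density estimate as independent proposal:

## kernel density estimate on the pooled samples as an independent MCMC proposal
pool <- c(s1, s2)[sample(length(s1) + length(s2), 1e4)]   # subsample for speed
bw   <- density(pool)$bw
rkde <- function() pool[sample.int(length(pool), 1)] + rnorm(1, 0, bw)
dkde <- function(y) mean(dnorm(y, pool, bw))
f <- dnorm                               # the "unknown" f, evaluable up to a constant
x <- 0; chain <- numeric(1e4)
for (t in seq_along(chain)) {
  y <- rkde()
  if (runif(1) < f(y) * dkde(x) / (f(x) * dkde(y))) x <- y
  chain[t] <- x
}
mean(chain < a)                          # approximates p(A) = 1/2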

warped Cauchys

Posted in Books, Kids, R, Statistics on May 4, 2021 by xi'an

A somewhat surprising request on X validated about the inverse cdf representation of a wrapped Cauchy distribution. I had not come across this distribution before, but its density being

f_{WC}(\theta;\gamma)=\sum_{n=-\infty}^\infty \frac{\gamma}{\pi(\gamma^2+(\theta+2\pi n)^2)}\,\mathbb{I}_{-\pi<\theta<\pi}

means that it is the superposition of shifted Cauchys on the unit circle (with nice complex representations). As such, it is easily simulated by re-shifting a Cauchy back to (-π,π), i.e. using the inverse transform

\theta = [\gamma\tan(\pi U-\pi/2)+\pi]\ \text{mod}\,(2\pi) - \pi
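A quick check of this representation in R, with γ=.5 as an arbitrary choice and the infinite sum truncated for plotting:

## simulate the wrapped Cauchy via the inverse transform and check against its density
gam   <- .5
U     <- runif(1e5)
theta <- (gam * tan(pi * U - pi / 2) + pi) %% (2 * pi) - pi
hist(theta, breaks = 100, prob = TRUE)
fwc <- function(t, g = gam, K = 50)      # truncated version of the series above
  sapply(t, function(th) sum(g / (pi * (g^2 + (th + 2 * pi * (-K:K))^2))))
curve(fwc, -pi, pi, add = TRUE, col = 2)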