Archive for cross validated

ratio of Gaussians

Posted in Books, Statistics, University life on April 12, 2021 by xi'an

Following (as usual) an X validated question, I came across two papers by George Marsaglia on the ratio of two arbitrary (i.e., non-standardised and possibly correlated) Normal variates. One is a 1965 JASA paper where the density of the ratio X/Y is exhibited, based on the fact that this random variable can always be represented as (a+ε)/(b+ξ), where ε and ξ are iid N(0,1) and a and b are constants. Surprisingly (?), this representation was challenged in a 1969 paper by David Hinkley (corrected in 1970).
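In the independent case, for instance, with X∼N(μ₁,σ₁²) and Y∼N(μ₂,σ₂²), the reduction (my transcription) is simply

\dfrac{X}{Y}=\dfrac{\sigma_1}{\sigma_2}\cdot\dfrac{a+\varepsilon}{b+\xi},\qquad a=\mu_1/\sigma_1,\quad b=\mu_2/\sigma_2

while in the correlated case the ratio is an affine transform of such a ratio of independent terms.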

And less surprisingly the ratio distribution behaves almost like a Cauchy, since its density is
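a transcription on my part, assuming Marsaglia's (2006) parameterisation with q=(b+at)/\sqrt{1+t^2}:

f(t)=\dfrac{e^{-(a^2+b^2)/2}}{\pi(1+t^2)}\left\{1+q\,e^{q^2/2}\int_0^q e^{-x^2/2}\,\text{d}x\right\}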

meaning it is a two-component mixture of a Cauchy distribution, with weight exp(-a²/2-b²/2), and of an altogether more complex distribution f₂. This is remarked upon by Marsaglia in the second, 2006, paper, although the description of the second component remains vague, beyond a possible bimodality. (It could well have a mean, actually.) The density f₂ however resembles (at least graphically) the generalised inverse Normal density I played with, eons ago.

hands-on probability 101

Posted in Books, Kids, pictures, Statistics, University life on April 3, 2021 by xi'an

When solving a rather simple probability question on X validated, namely the joint uniformity of the pair

(X,Y)=(A-B+\mathbb I_{A<B},C-B+\mathbb I_{C<B})

when A,B,C are iid U(0,1), I chose a rather pedestrian way and derived the joint distribution of (A-B,C-B), which turns out to be made of 8 components over the (-1,1)² domain. To conclude on the uniformity of the above pair, I added a hand-made picture explaining why the coverage by (X,Y) of any (red) square within (0,1)² is uniform, by virtue of the symmetry between the coverage by (A-B,C-B) of four copies of the (red) square, using colour tabs that were sitting on my desk! It did not seem to convince the originator of the question, who kept answering with more questions, or worse an ever-changing question, reproduced in real time on math.stackexchange! (revealing there that said originator was tutoring an undergraduate student). But this was a light moment in a dreary final day before a new lockdown.
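Since the claim is also easy to check by simulation, here is a quick Monte Carlo sketch (my code, not part of the original argument):

```r
# sanity check of the joint uniformity of (X,Y) over (0,1)²
set.seed(42)
n = 1e5
A = runif(n); B = runif(n); C = runif(n)
X = A - B + (A < B)   # wrap A-B back into (0,1)
Y = C - B + (C < B)   # wrap C-B back into (0,1)
mean(X); mean(Y)      # both close to 1/2
cor(X, Y)             # close to 0
```

Full two-dimensional uniformity can then be probed by, e.g., a chi-square test on a grid of bins.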

composition versus inversion

Posted in Books, Kids, R, Statistics on March 31, 2021 by xi'an

While trying to convey to an OP on X validated why the inversion method is not always the panacea in pseudo-random generation, I took the example of a mixture of K exponential distributions when K is very large, in order to impress (?) upon said OP that solving F(x)=u for such a closed-form cdf F is very costly, even when using a state-of-the-art (?) inversion algorithm like uniroot, since each step involves adding the K terms in the cdf. Selecting the component via the cumulative distribution function of the component weights instead proves to be quite fast, since using the rather crude
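(a minimal sketch of such a composition sampler, with illustrative rates la and weights we that are assumptions of mine, not the original settings, and with the cumulative weights computed once)

```r
# composition sampler for a mixture of K exponentials (a sketch)
K  = 1e5
la = sqrt(1:K)           # illustrative component rates
we = rep(1, K) / K       # illustrative component weights, summing to one
cw = cumsum(we)          # cumulative weights, computed once
rcompo = function(n)     # pick a component, then sample from it
  rexp(n, rate = la[findInterval(runif(n), cw) + 1])
```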


brings a 100-fold improvement over

F = function(x) sum(we * pexp(x, la)) # mixture cdf (weights we, rates la, assumed defined)
Q = function(u) uniroot(function(x) F(x) - u, lower = 0,
    upper = qexp(.999, rate = min(la)))[1] # numerical tail quantile

when K=10⁵, as shown by a benchmark call

         test elapsed
1       compo   0.057
2      Newton  45.736
3     uniroot   5.814

where Newton denotes a simple-minded Newton inversion. I wonder if there is a faster way to select the component in the mixture: using a while loop starting from the most likely components proves much slower, and accept-reject solutions are invariably slow or fail to work with such a large number of components. Devroye's Bible has a section (XIV.7.5) on simulating sums of variates from an infinite mixture distribution but, for once, nothing really helpful, and another section (IV.5) on series methods, where again I could not find a direct connection.

conjugate priors and sufficient statistics

Posted in Statistics on March 29, 2021 by xi'an

An X validated question rekindled my interest in the connection between sufficiency and conjugacy, by asking whether or not there is an equivalence between the existence of a (finite-dimension) conjugate family of priors and the existence of a sufficient statistic of fixed dimension in n, the sample size. The question is only of interest outside exponential families, which means, by the Pitman-Koopman-Darmois lemma, that the support of the sampling distribution must vary with the parameter.

While the existence of a sufficient statistic T of fixed dimension d whatever the (large enough) sample size n seems to clearly imply the existence of a (finite dimension) conjugate family of priors, or rather of a family associated with each possible dominating (prior) measure,

\mathfrak F=\{ \tilde \pi(\theta)\propto \tilde {f_n}(t_n(x_{1:n})|\theta) \pi_0(\theta)\,;\ n\in \mathbb N, x_{1:n}\in\mathfrak X^n\}

the reverse statement is a wee bit more delicate to prove, due to the varying supports of the sampling or prior distributions. Unless some conjugate prior in the assumed family has an unrestricted support, the argument seems to limit sufficiency to a particular subset of the parameter set. I think that the result remains correct in general, but I could not rigorously wrap up the proof.
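The standard illustration outside exponential families (my addition, not in the original post) is the uniform model: for x_{1:n} iid U(0,θ), the maximum T_n = max_i x_i is sufficient with fixed dimension one for every n, Pareto priors form a conjugate family, and the support (0,θ) indeed varies with θ:

\pi(\theta)\propto\theta^{-\alpha-1}\,\mathbb I_{\theta>\beta}\ \Longrightarrow\ \pi(\theta|x_{1:n})\propto\theta^{-\alpha-n-1}\,\mathbb I_{\theta>\max(\beta,T_n)}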

conjugate of a binomial

Posted in Statistics on March 25, 2021 by xi'an