Archive for chi-square density

inverse Gaussian trick [or treat?]

Posted in Books, Kids, R, Statistics, University life on October 29, 2020 by xi'an

When preparing my mid-term exam for my undergrad mathematical statistics course, I wanted to use the inverse Gaussian distribution IG(μ,λ) as an example of an exponential family and to include a random generator question. As shown above by a Fortran computer code from Michael, Schucany and Haas, a simple version can be based on simulating a χ²(1) variate v and solving in x the following second-degree polynomial equation

\dfrac{\lambda(x-\mu)^2}{\mu^2 x} = v

since the left-hand side transform is distributed as a χ²(1) random variable. The smaller root x₁, less than μ, is then chosen with probability μ/(μ+x₁) and the larger one, x₂=μ²/x₁, with probability x₁/(μ+x₁). A relatively easy question then, except when one considers asking for a proof of the χ²(1) result, which proved to be a harder cookie than expected! The paper usually referred to for the result, Schuster (1968), is quite cryptic on the matter, essentially stating that the above can be expressed as the (bijective) transform of Y=min(X,μ²/X) and that V~χ²(1) follows immediately. I eventually worked out a proof by the “law of the unconscious statistician” [a name I do not find particularly amusing!], but did not include the question in the exam. But I found it fairly interesting that the inverse Gaussian can be generated by “inverting” the above equation, i.e. going from a (squared) Gaussian variate V to the inverse Gaussian variate X. (Even though the name stems from the two cumulant generating functions being inverses of one another.)
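For what it is worth, here is a minimal R sketch of this generator (the name rinvgauss and its interface are my own, not base R):

# Michael-Schucany-Haas generator for IG(mu,lambda):
# simulate v ~ chi-squared(1), take the smaller root of the
# quadratic in x, and swap it for mu^2/x1 with the complementary
# probability
rinvgauss=function(n,mu,lambda){
  v=rchisq(n,df=1)
  w=mu*v
  # smaller root of lambda*(x-mu)^2/(mu^2*x)=v
  x1=mu+mu*(w-sqrt(w*(4*lambda+w)))/(2*lambda)
  # keep x1 with probability mu/(mu+x1), else return mu^2/x1
  ifelse(runif(n)<mu/(mu+x1),x1,mu^2/x1)
}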

arbitrary distributions with set correlation

Posted in Books, Kids, pictures, R, Statistics, University life on May 11, 2015 by xi'an

A question recently posted on X Validated by Antoni Parrelada: given two arbitrary cdfs F and G, how can we simulate a pair (X,Y) with marginals F and G and with a set correlation ρ? The answer posted by Antoni Parrelada was to reproduce the Gaussian copula solution: produce (X',Y') as a Gaussian bivariate vector with correlation ρ and then turn it into (X,Y)=(F⁻¹(Φ(X')),G⁻¹(Φ(Y'))). Unfortunately, this does not work, because the correlation is not preserved by the double transform. The graph above is part of my answer, for a χ² and a log-Normal cdf as F and G: while corr(X',Y')=ρ, corr(X,Y) drifts quite a lot from the diagonal! Actually, by playing long enough with my function

tacor=function(rho=0,nsim=1e4,fx=qnorm,fy=qnorm){
  # empirical correlation of (fx(U),fy(V)) when (U,V) stems from a
  # Gaussian copula with correlation rho (rho may be a vector)
  x1=rnorm(nsim);x2=rnorm(nsim)
  coeur=rho
  rho2=sqrt(1-rho^2)
  for (t in 1:length(rho)){
     # (x1,rho[t]*x1+rho2[t]*x2) is standard bivariate normal
     # with correlation rho[t]
     y=pnorm(cbind(x1,rho[t]*x1+rho2[t]*x2))
     coeur[t]=cor(fx(y[,1]),fy(y[,2]))}
  return(coeur)
}

Playing further, I managed to get an almost flat correlation graph for the admittedly convoluted call

tacor(seq(-1,1,.01),
      fx=function(x) qchisq(x^59,df=.01),
      fy=function(x) qlogis(x^59))

Now, the most interesting question is how to produce correlated simulations. A pedestrian way is to start with a copula, e.g. the above Gaussian copula, and to twist the correlation coefficient ρ of the copula until the desired correlation is attained for the transformed pair. That is, to draw the above curve and invert it. (Note that, as clearly exhibited by the graph just above, not all correlations can be achieved for arbitrary cdfs F and G.) This is however very pedestrian and I wonder whether there is a generic and somewhat automated solution…
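A rough sketch of this inversion via uniroot (copula_rho is my own helper; it presumes the target correlation is attainable for the chosen quantile functions and freezes the seed so that the Monte Carlo curve seen by uniroot is deterministic):

copula_rho=function(target,fx=qnorm,fy=qnorm,nsim=1e5){
  # objective: signed gap between achieved and target correlation
  f=function(r){set.seed(1);tacor(r,nsim=nsim,fx=fx,fy=fy)-target}
  uniroot(f,interval=c(-.999,.999))$root
}
# e.g., the copula coefficient producing cor(X,Y)=0.5 for chi-squared
# and logistic marginals:
# copula_rho(.5,fx=function(u) qchisq(u,df=3),fy=qlogis)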

Sufficiency [BC]

Posted in Books, Statistics on May 10, 2011 by xi'an

Here is an email I received about The Bayesian Choice a few days ago:

I am an undergraduate student in Japan. I am self-studying your classical book The Bayesian Choice. The book is wonderful with many instructive examples. Although it is a little bit hard for me right now, I think it will be very useful for my future research.

There is one point that I do not understand in Example 1.3.2 (p.14-15). I know the standard result that the sample mean and the sample variance are independent, with the sample mean following

\mathcal{N}(\mu,(1/n)\sigma^2)

while s^2/\sigma^2 follows a chi-square distribution with n-1 degrees of freedom. In this example, is it correct that one must factorize the likelihood function into g(T(x)|\theta), which must be the product of these normal and chi-square densities, and h(x|T(x)), which is free of \theta?

In the book I do not see why g(T(x)|\theta) is the product of normal and chi-square densities. The first part correctly corresponds to the density of \mathcal{N}(\mu,(1/n)\sigma^2), but the second part is not the density of the chi-square with n-1 degrees of freedom for s^2/\sigma^2.

The example, as often, skips a lot of details, meaning that when one starts from the likelihood

\sigma^{-n}\, e^{-n(\bar x-\theta)^2/2\sigma^2} \, e^{-s^2/2\sigma^2} \big/ (2\pi)^{n/2},

this expression depends on x only through T(x)=(\bar x,s^2). Furthermore, it involves the normal density on \bar x and part of the chi-square density on s^2. One can then plug in the missing power of s^2 to make g(T(x)|\theta) appear; the extra terms are then canceled by a function we can call h(x|T(x)). However, there is a typo in this example in that \sigma^n in the chi-square density should be \sigma^{n-1}!
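Spelling out the factorization skipped in the example (a reconstruction of the omitted bookkeeping, with s^2=\sum_i(x_i-\bar x)^2 and the corrected \sigma^{n-1} factor):

\dfrac{\sigma^{-n}}{(2\pi)^{n/2}}\, e^{-n(\bar x-\theta)^2/2\sigma^2}\, e^{-s^2/2\sigma^2} = \underbrace{\sqrt{\dfrac{n}{2\pi\sigma^2}}\, e^{-n(\bar x-\theta)^2/2\sigma^2}\times\dfrac{(s^2)^{(n-3)/2}\, e^{-s^2/2\sigma^2}}{2^{(n-1)/2}\,\Gamma\left(\frac{n-1}{2}\right)\sigma^{n-1}}}_{g(T(x)|\theta)}\times\underbrace{\dfrac{2^{(n-1)/2}\,\Gamma\left(\frac{n-1}{2}\right)}{(2\pi)^{(n-1)/2}\,\sqrt{n}\,(s^2)^{(n-3)/2}}}_{h(x|T(x))}

where the second factor depends on the data only through T(x) and is free of the parameters, as required.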