Archive for inverse Gaussian distribution

a rush to grade

Posted in Kids, pictures, Statistics, University life on October 29, 2020 by xi'an

inverse Gaussian trick [or treat?]

Posted in Books, Kids, R, Statistics, University life on October 29, 2020 by xi'an

When preparing my mid-term exam for my undergrad mathematical statistics course, I wanted to use the inverse Gaussian distribution IG(μ,λ) as an example of an exponential family and to include a random-generator question. As shown by the Fortran code of Michael, Schucany and Haas, a simple version can be based on simulating a χ²(1) variate v and solving in x the following second-degree polynomial equation

\dfrac{\lambda(x-\mu)^2}{\mu^2 x} = v

since the left-hand side transform is distributed as a χ²(1) random variable. The smallest root x₁, less than μ, is then chosen with probability μ/(μ+x₁) and the largest one, x₂=μ²/x₁, with probability x₁/(μ+x₁). A relatively easy question, then, except when one considers asking for a proof of the χ²(1) result, which proved to be a harder cookie than expected! The paper usually referred to for the result, Shuster (1968), is quite cryptic on the matter, essentially stating that the above can be expressed as the (bijective) transform of Y=min(X,μ²/X) and that V~χ²(1) follows immediately. I eventually worked out a proof by the “law of the unconscious statistician” [a name I do not find particularly amusing!], but did not include the question in the exam. Still, I found it fairly interesting that the inverse Gaussian can be generated by “inverting” the above equation, i.e., going from a (squared) Gaussian variate V to the inverse Gaussian variate X. (Even though the name stems from the two cumulant generating functions being inverses of one another.)
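For illustration, here is a minimal R sketch of the trick (the function name rinvgauss and its argument names are my own, not from the Fortran original): solving the quadratic λ(x−μ)² = vμ²x for its smaller root and then picking between the two roots with the probabilities above.

rinvgauss <- function(n, mu, lambda){
  v <- rnorm(n)^2                               # chi-squared(1) variates
  x1 <- mu + mu^2*v/(2*lambda) -                # smaller root of the quadratic
    mu*sqrt(4*mu*lambda*v + mu^2*v^2)/(2*lambda)
  ifelse(runif(n) < mu/(mu + x1), x1, mu^2/x1)  # root-selection step
}

As a quick sanity check, mean(rinvgauss(1e6, 1, 2)) should come close to the theoretical mean μ=1.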

invariant conjugate analysis for exponential families

Posted in Books, Statistics, University life on December 10, 2013 by xi'an

Here is a paper from Bayesian Analysis that I somehow missed and only became aware of thanks to a (more) recent paper by the first author: in 2012, Pierre Druilhet and Denis Pommeret published invariant conjugate analysis for exponential families. The authors define a new class of conjugate families, called Jeffreys’ conjugate priors (JCP), by using Jeffreys’ prior as the reference density (rather than the uniform measure of regular conjugate families), following the earlier proposal of Druilhet and Marin (2007, BA). Both families of course coincide in the case of quadratic variance exponential families. The motivation for using those new conjugate priors is that the family is invariant under reparametrisation and that it includes Jeffreys’ prior as a special case of conjugate prior. In the special case of the inverse Gaussian distribution, this approach leads to the conjugacy of the inverse normal distribution, a feature I noticed in 1991 when working on an astronomy project. There are two obvious drawbacks to those new conjugate families: one is that the priors are no longer always proper; the other is that the computations associated with those new priors are more involved, which may explain why the authors propose the MAP as their default estimator, since posterior expectations of the mean (in the natural representation [in x] of the exponential family) are no longer linear in x.
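In symbols (my notation, a sketch rather than the authors’ exact formulation): for a natural exponential family with density exp{θx−ψ(θ)}, the Fisher information is ψ″(θ), hence Jeffreys’ prior is proportional to √ψ″(θ), and the JCP family should read

\pi_J(\theta\mid s,n) \propto \sqrt{\psi''(\theta)}\,\exp\{\theta s - n\psi(\theta)\}

instead of the regular conjugate prior π(θ|s,n) ∝ exp{θs − nψ(θ)}, the extra √ψ″(θ) factor being what both buys the invariance and may cost propriety.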
