Archive for Wishart distribution

multiplying a Gaussian matrix and a Gaussian vector

Posted in Books with tags , , , , , on March 2, 2017 by xi'an

This arXived note by Pierre-Alexandre Mattei was actually inspired by one of my blog entries, itself written from a resolution of a question on X validated. The original result about the Laplace distribution dates at least to 1932 and a paper by Wishart and Bartlett! I am not sure the construct has clear statistical implications, but it is nonetheless a good calculus exercise.

The note produces an extension to the multivariate case, where the Laplace distribution is harder to define, in that multiple constructions are possible. The current paper opts for a definition based on the characteristic function, which leads to a rather unsavoury density involving Bessel functions. It nonetheless satisfies the constructive definition of being a multivariate Normal multiplied by a χ variate, plus a constant vector multiplied by the same χ variate squared. It can also be derived as the distribution of

Wy + ‖y‖²μ

when W is a (p,q) matrix with iid Gaussian columns, y is a Gaussian vector with independent components, and μ is a vector of the proper dimension. When μ=0 the marginals remain Laplace.
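As a quick numerical check, here is a minimal numpy sketch (my own, not Mattei's code) of the construction above, with μ=0 and q=2 chosen for illustration, in which case each marginal is a sum of two products of independent standard Normals, i.e. a scalar Laplace variate with variance q:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, n = 3, 2, 100_000
mu = np.zeros(p)  # with mu = 0 the marginals stay Laplace

# draw n replicates of z = W y + ||y||^2 mu, with W a (p, q) matrix
# of iid N(0,1) entries and y an N(0, I_q) vector
W = rng.standard_normal((n, p, q))
y = rng.standard_normal((n, q))
z = np.einsum('ijk,ik->ij', W, y) + (y**2).sum(axis=1)[:, None] * mu

# each marginal should have mean 0, variance q = 2, and the Laplace
# moment ratio m4 / m2^2 = 6 (excess kurtosis 3)
z0 = z[:, 0]
print(z0.var())                            # close to 2
print((z0**4).mean() / (z0**2).mean()**2)  # close to 6
```

The heavy tails relative to the Normal (moment ratio 6 instead of 3) are what the χ scale mixture produces.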

corrected MCMC samplers for multivariate probit models

Posted in Books, pictures, R, Statistics, University life with tags , , , , , , , , on May 6, 2015 by xi'an

“Moreover, IvD point out an error in Nobile’s derivation which can alter its stationary distribution. Ironically, as we shall see, the algorithms of IvD also contain an error.”

Xiyun Jiao and David A. van Dyk arXived a paper correcting an MCMC sampler, and the R package MNP, for the multivariate probit model proposed by Imai and van Dyk in 2005. [Hence the abbreviation IvD in the above quote.] Earlier versions include the Gibbs sampler for the multivariate probit model by Rob McCulloch and Peter Rossi in 1994, with a Metropolis update later added by Agostino Nobile, and finally the improved version developed by Imai and van Dyk in 2005. As noted in the above quote, Jiao and van Dyk have discovered two mistakes in this latest version, jeopardizing the validity of the output.

The multivariate probit model considered here is a multinomial model where the occurrence of the k-th category is represented by the k-th component of a correlated (multivariate) normal vector being the largest of all components. The latent normal model being non-identifiable, since invariant under both translation and scale changes, identifying constraints are used in the literature. This means using a covariance matrix of the form Σ/trace(Σ), where Σ is an inverse Wishart random matrix. In their 2005 implementation, relying on marginal data augmentation (which essentially means simulating the non-identifiable part repeatedly at various steps of the data augmentation algorithm), Imai and van Dyk missed a translation term and a constraint on the simulated matrices that leads to simulations outside the rightful support, as illustrated by the above graph [snapshot from the arXived paper].
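The trace constraint itself is easy to illustrate; below is a minimal numpy sketch (not IvD's sampler, with an identity scale matrix and arbitrary degrees of freedom as hypothetical choices) that draws an inverse Wishart matrix and rescales it onto the identified set:

```python
import numpy as np

rng = np.random.default_rng(1)
p, df = 3, 10  # dimension and degrees of freedom (hypothetical values)

# Sigma^{-1} ~ Wishart(df, I) as a sum of Gaussian outer products,
# hence Sigma is an inverse-Wishart draw
G = rng.standard_normal((df, p))
Sigma = np.linalg.inv(G.T @ G)

# identifying constraint: rescale so that trace(R) = 1
R = Sigma / np.trace(Sigma)
print(np.trace(R))                 # 1.0 up to rounding
print(np.linalg.eigvalsh(R) > 0)   # still positive definite
```

The point of the correction is that the simulated matrices must actually stay inside this trace-one set at every step of the data augmentation, which the original implementation failed to guarantee.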

Since the IvD method is used in many subsequent papers, it is quite important that these mistakes are signalled and corrected. [Another snapshot above shows how much both algorithms differ!] Without much thinking about this, I [thus idly] wonder why an identifying prior does not take the place of a hard identifying constraint, as it should solve the issue more nicely, in that it would create fewer constraints and more entropy (!) when exploring the augmented space, while theoretically providing a convergent approximation of the identifiable parts. I may (must!) however be missing an obvious constraint preventing this implementation.

Estimation of covariance matrices

Posted in Statistics with tags , , , , , on June 21, 2011 by xi'an

Mathilde Bouriga and Olivier Féron have posted a paper on arXiv centred on the estimation of covariance matrices using inverse-Wishart priors. They introduce hyperpriors on the hyperparameters in the spirit of Daniels and Kass (JASA, 1999) and derive Bayes estimators as well as MCMC procedures. They then run a simulation comparison between the different priors in terms of frequentist risks, concluding in favour of the shrinkage covariance estimators that shrink all components of the empirical covariance matrix. (This paper is part of Mathilde’s thesis, which I co-advise with Jean-Michel Marin.)
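To see what shrinking all components of the empirical covariance means in the inverse-Wishart setting, here is a hedged numpy sketch (hyperparameters ν and Ψ picked arbitrarily, zero-mean toy data, and only the conjugate single-level update, not the paper's hyperprior construction): the posterior mean pulls every entry of the empirical scatter matrix toward the prior scale matrix Ψ:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 4, 50
X = rng.standard_normal((n, p))  # toy zero-mean data

S = X.T @ X                      # scatter matrix
nu, Psi = p + 2, np.eye(p)       # hypothetical IW(nu, Psi) hyperparameters

# conjugate update: Sigma | X ~ IW(nu + n, Psi + S), whose mean shrinks
# the empirical covariance S/n toward the prior scale Psi
Sigma_hat = (Psi + S) / (nu + n - p - 1)
print(Sigma_hat.shape)  # (4, 4)
```

Adding hyperpriors on ν and Ψ, as in the paper, amounts to letting the data choose the amount and target of this shrinkage instead of fixing them in advance.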
