What are the distributions on the positive k-dimensional quadrant with parametrizable covariance matrix? (bis)

Wondering about the question I posted on Friday (on StackExchange, with no satisfactory answer so far!), I looked further at the special case of the gamma distribution I suggested at the end of that question. Starting from the moment conditions,

\dfrac{\alpha_{11}}{\beta_1} = \mu_1\,,\quad \dfrac{\alpha_{11}}{\beta_1^2} = \sigma_1^2\,,

\dfrac{\alpha_{21}\mu_1+\alpha_{22}}{\beta_2} = \mu_2\,,\quad \dfrac{\alpha_{21}^2\sigma^2_1}{\beta_2^2}+\dfrac{\alpha_{21}\mu_1+\alpha_{22}}{\beta_2^2} = \sigma^2_2\,,

and

\dfrac{\alpha_{21}(\sigma^2_1+\mu_1^2)+\alpha_{22}\mu_1}{\beta_2} = \sigma_{12}+\mu_1\mu_2

the [corrected, thanks to David Epstein!] solution is (hopefully) given by the system

\begin{cases} \beta_1 =\mu_1/\sigma_1^2\\ \alpha_{11}-\mu_1\beta_1 =0\\ \alpha_{22} = \mu_2\beta_2 - \alpha_{21}\mu_1\\ \alpha_{21} = \dfrac{\sigma_{12}}{\sigma^2_1}\beta_2\\ \sigma_{12}^2+ \dfrac{\sigma_1^2\mu_2}{\beta_2} = \sigma_1^2\sigma^2_2 \end{cases}

The resolution of this system obviously imposes conditions on those moments; for instance, the positivity of the shape parameter α₂₂ requires

\sigma^2_1\mu_2-\sigma_{12}\mu_1 >0
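Indeed, assuming the hierarchical gamma representation behind these moment equations, namely X_1\sim\text{Ga}(\alpha_{11},\beta_1) and X_2\mid X_1=x_1\sim\text{Ga}(\alpha_{21}x_1+\alpha_{22},\beta_2), the last equation of the system solves into

\beta_2 = \dfrac{\sigma_1^2\mu_2}{\sigma_1^2\sigma^2_2-\sigma_{12}^2}\,,\quad \alpha_{21} = \dfrac{\sigma_{12}}{\sigma^2_1}\,\beta_2\,,\quad \alpha_{22} = \dfrac{\sigma_1^2\mu_2-\sigma_{12}\mu_1}{\sigma^2_1}\,\beta_2\,,

so that β₂ is automatically positive when Σ is positive definite, α₂₂>0 amounts to the above condition, and α₂₁≥0 (needed for the conditional shape α₂₁x₁+α₂₂ to remain positive over x₁>0) further requires σ₁₂≥0.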

So I ran a small R experiment checking when there was no acceptable solution to the system. I started with five moments that satisfied the basic Stieltjes and determinant conditions

# basically anything (a pair of positive means)
mu=runif(2,0,10)
# Jensen inequality (variances at least mu^2)
sig=c(mu[1]^2/runif(1),mu[2]^2/runif(1))
# my R code returning the solution if any; the covariance sigma_12 is
# drawn within the Cauchy-Schwarz bounds
sol(mu,c(sig,runif(1,-sqrt(prod(sig)),sqrt(prod(sig)))))

and got a fair share (20%) of rejections, e.g.

> sol(mu,c(sig,runif(1,-sqrt(prod(sig)),sqrt(prod(sig)))))
$solub
[1] FALSE

$alpha
[1]  0.8086944  0.1220291 -0.1491023

$beta
[1] 0.1086459 0.5320866
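The body of sol is not reproduced here; a minimal sketch of what such a solver could look like, under the same hierarchical gamma representation and returning the same fields as the output above (a reconstruction, not the original code), is

sol=function(mu,sig){
  # sig = (sigma_1^2, sigma_2^2, sigma_12)
  beta1=mu[1]/sig[1]
  alpha11=mu[1]*beta1
  # beta_2 obtained from the last equation of the system
  beta2=sig[1]*mu[2]/(sig[1]*sig[2]-sig[3]^2)
  alpha21=sig[3]*beta2/sig[1]
  alpha22=mu[2]*beta2-alpha21*mu[1]
  # acceptable solution only if all shapes and rates are positive
  list(solub=all(c(alpha11,alpha21,alpha22,beta1,beta2)>0),
    alpha=c(alpha11,alpha21,alpha22),beta=c(beta1,beta2))
}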

However, not being sure about the constraints on the five moments, I am now left with another question: what are the necessary and sufficient conditions on the five moments (two means, two variances, and one covariance) of a pair of positive random variables?! Or, more generally, what are the necessary and sufficient conditions on a k-dimensional mean vector μ and covariance matrix Σ for them to be the first and second moments of a positive k-dimensional random vector?

7 Responses to “What are the distributions on the positive k-dimensional quadrant with parametrizable covariance matrix? (bis)”

  1. […] Universidade de São Paulo, Brazil) has posted an answer to my earlier question both as a comment on the ‘Og and as a solution on StackOverflow (with a much more readable LaTeX output). His solution is based […]

  2. Dear Xi’an,

    I’m not sure if the following helps with your question. Suppose that we have a multivariate normal random vector
    (\log X_1,\dots,\log X_k) \sim N(\mu,\Sigma) \, ,
    with \mu\in\mathbb{R}^k and k\times k symmetric positive definite matrix \Sigma=(\sigma_{ij}).

    For this lognormal (X_1,\dots,X_k) we have
    m_i := E[X_i] = e^{\mu_i + \sigma_{ii}/2} \, , \quad i=1,\dots,k\, ,
    c_{ij} := Cov[X_i,X_j] = m_i \,m_j \,(e^{\sigma_{ij}} - 1) \, , \quad i,j=1,\dots,k\, ,
    and it follows that c_{ij}> -m_im_j.

    So, we can ask the converse question: given m=(m_1,\dots,m_k)\in\mathbb{R}^k_+ and k\times k symmetric positive definite matrix C=(c_{ij}), satisfying c_{ij}>-m_im_j, if we let
    \mu_i = \log m_i - \frac{1}{2} \log\left(\frac{c_{ii}}{m_i^2} + 1 \right) \, , \quad i=1,\dots,k \, ,
    \sigma_{ij} = \log\left(\frac{c_{ij}}{m_i m_j} + 1 \right) \, , \quad i,j=1,\dots,k \, ,
    we will have a lognormal vector with the prescribed means and covariances.

    Regards,

    Paulo.

    P.S. The constraint on C is equivalent to \mathbb{E}[X_i X_j]> 0.
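    A quick numerical check of this construction (not part of the original comment; a base R sketch with arbitrary made-up values of m and C, relying on chol() to fail if the matched \Sigma is not positive definite):

    # sanity check of the lognormal moment matching (bivariate case)
    m=c(2,5)                    # target means (arbitrary)
    C=matrix(c(1,.6,.6,2),2,2)  # target covariance (positive definite)
    Sig=log(C/outer(m,m)+1)     # matched covariance of the underlying normal
    mu=log(m)-diag(Sig)/2       # matched means of the underlying normal
    n=1e6
    Z=matrix(rnorm(2*n),n,2)%*%chol(Sig)
    X=exp(sweep(Z,2,mu,"+"))
    colMeans(X)  # should be close to m, up to Monte Carlo error
    cov(X)       # should be close to C, up to Monte Carlo error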

  3. I have opened a question on math.se http://math.stackexchange.com/questions/127813/what-are-the-restrictions-on-the-covariance-matrix-of-a-nonnegative-multivariate

    I suspect that someone there may be able to answer your initial question as well.

    • Thank you: I was planning to post this question myself, and would have preferred to do so, but this is not a major problem! We will see if this attracts more answers than my original question.

      • I apologize.

        One comment: Your R code does not seem to restrict the covariance of the two variables to be greater than -mu[1]*mu[2]. This follows from Cov(X,Y) = E(XY)-E(X)E(Y) and the fact that XY is always nonnegative.
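        For instance, the covariance draw in the experiment above could incorporate this extra lower bound (a small tweak, not in the original code):

        # also bound the simulated covariance below by -mu[1]*mu[2]
        lo=max(-sqrt(prod(sig)),-mu[1]*mu[2])
        sol(mu,c(sig,runif(1,lo,sqrt(prod(sig)))))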
