Wishart distribution

Wishart
Notation: X ~ W_p(V, n)
Parameters: n > p − 1 degrees of freedom (real); V > 0 scale matrix (p × p, positive definite)
Support: X (p × p) positive definite matrix
PDF: \frac{|\mathbf{X}|^{\frac{n-p-1}{2}} e^{-\frac{\operatorname{tr}(\mathbf{V}^{-1}\mathbf{X})}{2}}}{2^{\frac{np}{2}}|\mathbf{V}|^{\frac{n}{2}}\Gamma_p(\frac{n}{2})}
Mean: \operatorname{E}[X] = nV
Mode: (n − p − 1)V for n ≥ p + 1
Variance: \operatorname{Var}(\mathbf{X}_{ij}) = n \left (v_{ij}^2+v_{ii}v_{jj} \right )
Entropy: see below
CF: \Theta \mapsto \left|{\mathbf I} - 2i\,{\mathbf\Theta}{\mathbf V}\right|^{-\frac{n}{2}}

In statistics, the Wishart distribution is a generalization to multiple dimensions of the chi-squared distribution, or, in the case of non-integer degrees of freedom, of the gamma distribution. It is named in honor of John Wishart, who first formulated the distribution in 1928.[1]

It is a family of probability distributions defined over symmetric, nonnegative-definite matrix-valued random variables (“random matrices”). These distributions are of great importance in the estimation of covariance matrices in multivariate statistics. In Bayesian statistics, the Wishart distribution is the conjugate prior of the inverse covariance matrix of a multivariate normal random vector.

Definition

Suppose X is an n × p matrix, each row of which is independently drawn from a p-variate normal distribution with zero mean:

X_{(i)} = (x_i^1,\dots,x_i^p) \sim N_p(0,V).

Then the Wishart distribution is the probability distribution of the p × p random matrix S = X^T X, known as the scatter matrix. One indicates that S has that probability distribution by writing

S\sim W_p(V,n).

The positive integer n is the number of degrees of freedom. Sometimes this is written W(V, p, n). For n ≥ p the matrix S is invertible with probability 1 if V is invertible.

If p = V = 1 then this distribution is a chi-squared distribution with n degrees of freedom.
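
The construction in this definition translates directly into code. The following NumPy sketch (the scale matrix V and the sample sizes are arbitrary illustrative choices) draws the n rows from N_p(0, V), forms the scatter matrix, and checks the mean E[X] = nV by Monte Carlo:

    import numpy as np

    rng = np.random.default_rng(0)
    p, n = 3, 10
    V = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])    # an arbitrary positive definite scale matrix

    # One draw: n i.i.d. rows from N_p(0, V), then the scatter matrix S = X^T X.
    X = rng.multivariate_normal(np.zeros(p), V, size=n)   # n x p
    S = X.T @ X                                           # S ~ W_p(V, n)

    # Monte Carlo check of the mean: the average scatter matrix approaches n V.
    trials = 2000
    total = np.zeros((p, p))
    for _ in range(trials):
        Xr = rng.multivariate_normal(np.zeros(p), V, size=n)
        total += Xr.T @ Xr
    print(total / trials)   # close to n * V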

Occurrence

The Wishart distribution arises as the distribution of the sample covariance matrix for a sample from a multivariate normal distribution. It occurs frequently in likelihood-ratio tests in multivariate statistical analysis. It also arises in the spectral theory of random matrices[citation needed] and in multidimensional Bayesian analysis.[2] It is also encountered in wireless communications, in the analysis of the performance of Rayleigh fading MIMO wireless channels.[3]

Probability density function

The Wishart distribution can be characterized by its probability density function as follows:

Let X be a p × p symmetric matrix of random variables that is positive definite. Let V be a (fixed) positive definite matrix of size p × p.

Then, if n ≥ p, X has a Wishart distribution with n degrees of freedom if it has a probability density function given by

\frac{1}{2^\frac{np}{2}\left|{\mathbf V}\right|^\frac{n}{2}\Gamma_p(\frac{n}{2})} {\left|\mathbf{X}\right|}^{\frac{n-p-1}{2}} e^{-\frac{1}{2}{\rm tr}({\mathbf V}^{-1}\mathbf{X})}

where \left|{\mathbf X}\right| denotes the determinant of X and Γp(·) is the multivariate gamma function, defined as

\Gamma_p \left (\tfrac{n}{2} \right )= \pi^{\frac{p(p-1)}{4}}\prod_{j=1}^p \Gamma\left ( \tfrac{n}{2} + \tfrac{1-j}{2} \right ).

In fact the above definition can be extended to any real n > p − 1. If n ≤ p − 1, then the Wishart no longer has a density; instead it is a singular distribution that takes values in a lower-dimensional subspace of the space of p × p matrices.[4]
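
On the log scale the density is straightforward to evaluate numerically. The sketch below transcribes the formula, using scipy.special.multigammaln for ln Γ_p, and cross-checks it against scipy.stats.wishart.logpdf; the particular V, n and test matrix are arbitrary:

    import numpy as np
    from scipy.special import multigammaln
    from scipy.stats import wishart

    def wishart_logpdf(X, V, n):
        """Log density of W_p(V, n) at a positive definite matrix X (n > p - 1)."""
        p = X.shape[0]
        logdet_X = np.linalg.slogdet(X)[1]
        logdet_V = np.linalg.slogdet(V)[1]
        log_norm = n * p / 2 * np.log(2) + n / 2 * logdet_V + multigammaln(n / 2, p)
        return (n - p - 1) / 2 * logdet_X - np.trace(np.linalg.solve(V, X)) / 2 - log_norm

    p, n = 3, 7.5                       # any real n > p - 1 is allowed
    V = np.diag([1.0, 2.0, 3.0])
    X = wishart.rvs(df=n, scale=V, random_state=0)
    print(wishart_logpdf(X, V, n))           # matches the library value below
    print(wishart.logpdf(X, df=n, scale=V))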

Use in Bayesian statistics

In Bayesian statistics, in the context of the multivariate normal distribution, the Wishart distribution is the conjugate prior to the precision matrix Ω = Σ−1, where Σ is the covariance matrix.

Choice of parameters

The least informative, proper Wishart prior is obtained by setting n = p.[citation needed]

The prior mean of Wp(V, n) is nV, suggesting that a reasonable choice for V^{−1} would be nΣ0, where Σ0 is some prior guess for the covariance matrix.
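
As a concrete illustration of this conjugacy, the sketch below applies the standard update for zero-mean normal data: a prior W_p(V, n) combined with N observations x_i gives the posterior W_p((V^{-1} + Σ_i x_i x_i^T)^{-1}, n + N). This standard result is not derived in this article, and all numerical values are arbitrary:

    import numpy as np

    rng = np.random.default_rng(1)
    p, N = 3, 200
    Sigma_true = np.diag([1.0, 0.5, 2.0])
    data = rng.multivariate_normal(np.zeros(p), Sigma_true, size=N)

    # Least informative proper prior per the text: n = p, and V^{-1} = n * Sigma_0
    # with Sigma_0 a prior guess for the covariance matrix.
    n = p
    Sigma_0 = np.eye(p)
    V_prior_inv = n * Sigma_0

    S = data.T @ data                       # sum of x_i x_i^T
    n_post = n + N
    V_post = np.linalg.inv(V_prior_inv + S)

    # The posterior mean of the precision matrix is n_post * V_post; its inverse
    # is a point estimate of the covariance and should be close to Sigma_true.
    print(np.linalg.inv(n_post * V_post))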

Properties

Log-expectation

Note the following formula:[5]

\operatorname{E}[\ln|\mathbf{X}|] =  \psi_p(n/2) + p\ln(2) + \ln|\mathbf{V}|

where \psi_p is the multivariate digamma function (the derivative of the log of the multivariate gamma function).

This plays a role in variational Bayes derivations for Bayes networks involving the Wishart distribution.
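
The identity is easy to check numerically. In the sketch below, ψ_p is implemented as a sum of ordinary digamma functions, ψ_p(a) = Σ_{j=1}^p ψ(a + (1 − j)/2), and the analytic value is compared with a Monte Carlo average of ln|X| (the parameters are arbitrary):

    import numpy as np
    from scipy.special import digamma
    from scipy.stats import wishart

    def multidigamma(a, p):
        """Multivariate digamma: derivative of ln Gamma_p at a."""
        return sum(digamma(a + (1 - j) / 2) for j in range(1, p + 1))

    p, n = 3, 8
    V = np.diag([1.0, 2.0, 0.5])
    analytic = multidigamma(n / 2, p) + p * np.log(2) + np.linalg.slogdet(V)[1]

    samples = wishart.rvs(df=n, scale=V, size=20000, random_state=0)
    mc = np.mean([np.linalg.slogdet(S)[1] for S in samples])
    print(analytic, mc)   # the two values agree to roughly two decimal places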

Entropy

The information entropy of the distribution has the following formula:[5]

\operatorname{H}[\mathbf{X}] = -\ln \left (B(\mathbf{V},n) \right ) -\frac{n-p-1}{2} \operatorname{E}[\ln|\mathbf{X}|] + \frac{np}{2}

where B(V, n) is the normalizing constant of the distribution:

B(\mathbf{V},n) = \frac{1}{\left|\mathbf{V}\right|^\frac{n}{2} 2^\frac{np}{2}\Gamma_p(\frac{n}{2})}

This can be expanded as follows:

\begin{align}
\operatorname{H}[\mathbf{X}] &= \tfrac{n}{2}\ln|\mathbf{V}| +\tfrac{np}{2}\ln(2) + \ln\Gamma_p(\tfrac{n}{2}) -\tfrac{n-p-1}{2} \operatorname{E}[\ln|\mathbf{X}|] + \tfrac{np}{2} \\
&= \tfrac{n}{2}\ln|\mathbf{V}| + \tfrac{np}{2}\ln(2)  + \ln\Gamma_p(\tfrac{n}{2}) -\tfrac{n-p-1}{2}\left( \psi_p\left(\tfrac{n}{2}\right) + p\ln(2) + \ln|\mathbf{V}|\right) + \tfrac{np}{2} \\
&= \tfrac{n}{2}\ln|\mathbf{V}| + \tfrac{np}{2}\ln(2) + \ln\Gamma_p(\tfrac{n}{2}) - \tfrac{n-p-1}{2}\psi_p\left(\tfrac{n}{2}\right)  - \frac{n-p-1}{2} \left(p\ln(2) +\ln|\mathbf{V}| \right ) + \tfrac{np}{2} \\
&= \tfrac{p+1}{2}\ln|\mathbf{V}| + \tfrac{1}{2}p(p+1)\ln(2) + \ln\Gamma_p(\tfrac{n}{2}) - \tfrac{n-p-1}{2}\psi_p\left(\tfrac{n}{2}\right)  + \tfrac{np}{2}
\end{align}
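
The final line transcribes directly into a numerical routine, sketched below with arbitrary test parameters (multidigamma is as in the log-expectation sketch above):

    import numpy as np
    from scipy.special import digamma, multigammaln

    def multidigamma(a, p):
        return sum(digamma(a + (1 - j) / 2) for j in range(1, p + 1))

    def wishart_entropy(V, n):
        """Entropy of W_p(V, n), from the final line of the expansion above."""
        p = V.shape[0]
        logdet_V = np.linalg.slogdet(V)[1]
        return ((p + 1) / 2 * logdet_V
                + p * (p + 1) / 2 * np.log(2)
                + multigammaln(n / 2, p)
                - (n - p - 1) / 2 * multidigamma(n / 2, p)
                + n * p / 2)

    print(wishart_entropy(np.diag([1.0, 2.0]), 5.0))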

Cross-entropy

The cross-entropy of two Wishart distributions p_0 (with parameters n_0, V_0) and p_1 (with parameters n_1, V_1) is

\begin{align}
H(p_0, p_1) &= \operatorname{E}_{p_0}[-\log p_1]\\
&= \operatorname{E}_{p_0}\left[-\log \frac{|\mathbf{X}|^{\frac{n_1 - p - 1}{2}} e^{-\frac{\mathrm{tr}(\mathbf{V}_1^{-1} \mathbf{X})}{2}}}{2^{\frac{n_1 p}{2}} |\mathbf{V}_1|^{\frac{n_1}{2}} \Gamma_p(\tfrac{n_1}{2})}\right]\\
&= \tfrac{n_1 p}{2} \log 2 + \tfrac{n_1}{2} \log |\mathbf{V}_1| + \log \Gamma_p(\tfrac{n_1}{2}) - \tfrac{n_1 - p - 1}{2} \operatorname{E}_{p_0}[\log |\mathbf{X}|] + \tfrac{1}{2}\operatorname{E}_{p_0}[\mathrm{tr}(\mathbf{V}_1^{-1}\mathbf{X})] \\
&= \tfrac{n_1 p}{2} \log 2 + \tfrac{n_1}{2} \log |\mathbf{V}_1| + \log \Gamma_p(\tfrac{n_1}{2}) - \tfrac{n_1 - p - 1}{2} \left( \psi_p(\tfrac{n_0}{2}) + p \log 2 + \log |\mathbf{V}_0|\right)+ \tfrac{1}{2}\mathrm{tr}(\mathbf{V}_1^{-1} n_0 \mathbf{V}_0) \\
&=-\tfrac{n_1}{2} \log |\mathbf{V}_1^{-1} \mathbf{V}_0| +  \tfrac{p+1}{2} \log |\mathbf{V}_0| + \tfrac{n_0}{2}\mathrm{tr}(\mathbf{V}_1^{-1} \mathbf{V}_0)+ \log \Gamma_p(\tfrac{n_1}{2}) - \tfrac{n_1 - p - 1}{2}  \psi_p(\tfrac{n_0}{2})  +  \tfrac{p(p+1)}{2} \log 2 \\
\end{align}

Note that when p_0 = p_1 (that is, n_0 = n_1 and V_0 = V_1), the cross-entropy reduces to the entropy.

KL-divergence

The Kullback–Leibler divergence of p_1 from p_0 is


D_{KL}(p_0 \| p_1) = H(p_0, p_1) - H(p_0) =-\tfrac{n_1}{2} \log |\mathbf{V}_1^{-1} \mathbf{V}_0|  + \tfrac{n_0}{2}(\mathrm{tr}(\mathbf{V}_1^{-1} \mathbf{V}_0) - p)+ \log \frac{\Gamma_p(\tfrac{n_1}{2})}{\Gamma_p(\tfrac{n_0}{2})} + \tfrac{n_0 - n_1 }{2}  \psi_p(\tfrac{n_0}{2})
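
The closed form transcribes directly; the sketch below also checks it against a Monte Carlo estimate of E_{p_0}[log p_0(X) − log p_1(X)], using arbitrary parameters (multidigamma is as defined in the earlier sketches):

    import numpy as np
    from scipy.special import digamma, multigammaln
    from scipy.stats import wishart

    def multidigamma(a, p):
        return sum(digamma(a + (1 - j) / 2) for j in range(1, p + 1))

    def wishart_kl(V0, n0, V1, n1):
        """KL divergence D(W_p(V0, n0) || W_p(V1, n1)) from the closed form."""
        p = V0.shape[0]
        M = np.linalg.solve(V1, V0)                  # V1^{-1} V0
        return (-n1 / 2 * np.linalg.slogdet(M)[1]
                + n0 / 2 * (np.trace(M) - p)
                + multigammaln(n1 / 2, p) - multigammaln(n0 / 2, p)
                + (n0 - n1) / 2 * multidigamma(n0 / 2, p))

    V0, n0 = np.eye(2), 6.0
    V1, n1 = np.diag([2.0, 0.5]), 8.0
    print(wishart_kl(V0, n0, V1, n1))

    # Monte Carlo check via the definition of KL divergence.
    Xs = wishart.rvs(df=n0, scale=V0, size=5000, random_state=0)
    print(np.mean([wishart.logpdf(X, df=n0, scale=V0)
                   - wishart.logpdf(X, df=n1, scale=V1) for X in Xs]))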

Characteristic function

The characteristic function of the Wishart distribution is

\Theta \mapsto \left|{\mathbf I} - 2i\,{\mathbf\Theta}{\mathbf V}\right|^{-\frac{n}{2}}.

In other words,

\Theta \mapsto \operatorname{E}\left [ \mathrm{exp}\left (i \mathrm{tr}(\mathbf{X}{\mathbf\Theta})\right )\right ] = \left|{\mathbf I} - 2i{\mathbf\Theta}{\mathbf V}\right|^{-\frac{n}{2}}

where E[⋅] denotes expectation. (Here Θ and I are matrices the same size as V (I is the identity matrix), and i is the square root of −1.)[6]
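
The identity can be verified by simulation; the sketch below compares the closed form with a Monte Carlo estimate of E[exp(i tr(XΘ))] for an arbitrary symmetric Θ:

    import numpy as np
    from scipy.stats import wishart

    p, n = 2, 5
    V = np.array([[1.0, 0.3], [0.3, 0.5]])
    Theta = np.array([[0.2, 0.1], [0.1, -0.1]])    # arbitrary symmetric matrix

    closed_form = np.linalg.det(np.eye(p) - 2j * Theta @ V) ** (-n / 2)

    Xs = wishart.rvs(df=n, scale=V, size=200000, random_state=0)
    mc = np.mean(np.exp(1j * np.einsum('kij,ji->k', Xs, Theta)))  # tr(X Theta)
    print(closed_form, mc)   # real and imaginary parts roughly agree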

Theorem

If a p × p random matrix X has a Wishart distribution with m degrees of freedom and variance matrix V — write \mathbf{X}\sim\mathcal{W}_p({\mathbf V},m) — and C is a q × p matrix of rank q, then[7]

\mathbf{C}\mathbf{X}{\mathbf C}^T \sim \mathcal{W}_q\left({\mathbf C}{\mathbf V}{\mathbf C}^T,m\right).

Corollary 1

If z is a nonzero p × 1 constant vector, then:[7]

{\mathbf z}^T\mathbf{X}{\mathbf z}\sim\sigma_z^2\chi_m^2.

In this case, \chi_m^2 is the chi-squared distribution and \sigma_z^2={\mathbf z}^T{\mathbf V}{\mathbf z} (note that \sigma_z^2 is a constant; it is positive because V is positive definite).

Corollary 2

Consider the case where zT = (0, ..., 0, 1, 0, ..., 0) (that is, the j-th element is one and all others zero). Then corollary 1 above shows that

w_{jj}\sim\sigma_{jj}\chi^2_m

gives the marginal distribution of each of the elements on the matrix's diagonal.
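
Corollary 1 is simple to check by simulation: the quadratic form z^T X z, rescaled by σ_z² = z^T V z, should pass a goodness-of-fit test against χ²_m. A sketch with arbitrary z, V and m:

    import numpy as np
    from scipy.stats import wishart, chi2, kstest

    p, m = 3, 7
    V = np.diag([1.0, 2.0, 3.0])
    z = np.array([1.0, -1.0, 2.0])
    sigma2_z = z @ V @ z

    Xs = wishart.rvs(df=m, scale=V, size=5000, random_state=0)
    q = np.einsum('kij,i,j->k', Xs, z, z) / sigma2_z   # z^T X z / sigma_z^2

    # Kolmogorov-Smirnov test against chi2(m): expect a large p-value.
    print(kstest(q, chi2(df=m).cdf))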

Noted statistician George Seber points out that the Wishart distribution is not called the “multivariate chi-squared distribution” because the marginal distribution of the off-diagonal elements is not chi-squared. Seber prefers to reserve the term multivariate for the case when all univariate marginals belong to the same family.[8]

Estimator of the multivariate normal distribution

The Wishart distribution is the sampling distribution of the maximum-likelihood estimator (MLE) of the covariance matrix of a multivariate normal distribution.[9] A derivation of the MLE uses the spectral theorem.

Bartlett decomposition

The Bartlett decomposition of a matrix X from a p-variate Wishart distribution with scale matrix V and n degrees of freedom is the factorization:

\mathbf{X} = {\textbf L}{\textbf A}{\textbf A}^T{\textbf L}^T,

where L is the Cholesky factor of V, and:

\mathbf A = \begin{pmatrix}
c_1 & 0 & 0 & \cdots & 0\\
n_{21} & c_2 &0 & \cdots& 0 \\
n_{31} & n_{32} & c_3 & \cdots & 0\\
\vdots & \vdots & \vdots &\ddots & \vdots \\
n_{p1} & n_{p2} & n_{p3} &\cdots & c_p
\end{pmatrix}

where c_i^2 \sim \chi^2_{n-i+1} and the n_{ij} \sim N(0, 1) are independent.[10] This provides a useful method for obtaining random samples from a Wishart distribution.[11]
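
A minimal sketch of this sampling method follows; note that with zero-based indexing the i-th diagonal entry uses n − i degrees of freedom:

    import numpy as np

    def wishart_rvs_bartlett(V, n, rng):
        """Draw one sample from W_p(V, n) via the Bartlett decomposition."""
        p = V.shape[0]
        L = np.linalg.cholesky(V)                    # lower triangular, V = L L^T
        A = np.zeros((p, p))
        for i in range(p):
            A[i, i] = np.sqrt(rng.chisquare(n - i))  # c_{i+1}^2 ~ chi2(n - i)
            A[i, :i] = rng.standard_normal(i)        # n_{ij} ~ N(0, 1) below the diagonal
        LA = L @ A
        return LA @ LA.T

    rng = np.random.default_rng(0)
    V = np.array([[2.0, 0.5], [0.5, 1.0]])
    X = wishart_rvs_bartlett(V, 6, rng)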

Marginal distribution of matrix elements

Let V be a 2 × 2 variance matrix characterized by correlation coefficient −1 < ρ < 1 and L its lower Cholesky factor:

\mathbf{V} = \begin{pmatrix}
\sigma_1^2 & \rho \sigma_1 \sigma_2 \\
\rho \sigma_1 \sigma_2 & \sigma_2^2
\end{pmatrix},
\qquad
\mathbf{L} = \begin{pmatrix}
\sigma_1 & 0 \\
\rho \sigma_2 & \sqrt{1-\rho^2} \sigma_2
\end{pmatrix}

Multiplying through the Bartlett decomposition above, we find that a random sample from the 2 × 2 Wishart distribution is

\mathbf{X} = \begin{pmatrix}
\sigma_1^2 c_1^2 & \sigma_1 \sigma_2 \left (\rho c_1^2 + \sqrt{1-\rho^2} c_1 n_{21} \right ) \\
\sigma_1 \sigma_2 \left (\rho c_1^2 + \sqrt{1-\rho^2} c_1 n_{21} \right ) & \sigma_2^2 \left(\left (1-\rho^2 \right ) c_2^2 + \left (\sqrt{1-\rho^2} n_{21} + \rho c_1 \right )^2 \right)
\end{pmatrix}

The diagonal elements follow the χ2 distribution with n degrees of freedom, scaled by σ_i^2; this is most evident in the first element, σ_1^2 c_1^2. The off-diagonal element is less familiar but can be identified as a normal variance-mean mixture where the mixing density is a χ2 distribution. The corresponding marginal probability density for the off-diagonal element is therefore the variance-gamma distribution

f(x_{12}) =  \frac{\left | x_{12} \right |^{\frac{n-1}{2}}}{\Gamma\left(\frac{n}{2}\right) \sqrt{2^{n-1} \pi \left (1-\rho^2 \right ) \left (\sigma_1 \sigma_2 \right )^{n+1}}} \cdot K_{\frac{n-1}{2}} \left(\frac{\left |x_{12} \right |}{\sigma_1 \sigma_2 \left (1-\rho^2 \right )}\right) \exp{\left(\frac{\rho x_{12}}{\sigma_1 \sigma_2 (1-\rho^2)}\right)}

where Kν(z) is the modified Bessel function of the second kind.[12] Similar results may be found for higher dimensions, but the interdependence of the off-diagonal correlations becomes increasingly complicated. It is also possible to write down the moment-generating function even in the noncentral case (essentially the nth power of equation 10 in Craig (1936)[13]), although the probability density then becomes an infinite sum of Bessel functions.
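
The density transcribes directly; the sketch below evaluates it with arbitrary parameters and checks that it integrates to one (the point x = 0 is skipped because the Bessel factor is numerically singular there, although the product remains finite):

    import numpy as np
    from scipy.special import kv, gamma

    def offdiag_pdf(x, n, s1, s2, rho):
        """Variance-gamma marginal density of X_12, transcribed from the formula."""
        c = s1 * s2 * (1 - rho ** 2)
        norm = gamma(n / 2) * np.sqrt(2 ** (n - 1) * np.pi * (1 - rho ** 2)
                                      * (s1 * s2) ** (n + 1))
        return (np.abs(x) ** ((n - 1) / 2) / norm
                * kv((n - 1) / 2, np.abs(x) / c)
                * np.exp(rho * x / c))

    n, s1, s2, rho = 5, 1.0, 1.5, 0.3
    dx = 0.01
    xs = np.arange(-15.0, 25.0, dx)
    xs = xs[np.abs(xs) > 1e-6]          # avoid the singular evaluation at x = 0
    print((offdiag_pdf(xs, n, s1, s2, rho) * dx).sum())   # close to 1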

The possible range of the shape parameter

It can be shown[14] that the Wishart distribution can be defined if and only if the shape parameter n belongs to the set

\Lambda_p:=\{0,\cdots,p-1\}\cup \left(p-1,\infty\right).

This set is named after Gindikin, who introduced it[15] in the 1970s in the context of gamma distributions on homogeneous cones. However, for the new parameters in the discrete spectrum of the Gindikin ensemble, namely,

\Lambda_p^*:=\{0, \cdots, p-1\},

the corresponding Wishart distribution has no Lebesgue density.


References

  1. Wishart, J. (1928). "The generalised product moment distribution in samples from a normal multivariate population". Biometrika 20A (1–2): 32–52.
  2.
  3.
  4.
  5. Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer. p. 693.
  6.
  7.
  8. Seber, George A. F. (2004). Multivariate Observations. Wiley.
  9.
  10.
  11.
  12.
  13. Craig, Cecil C. (1936). "On the frequency function of xy". Annals of Mathematical Statistics 7 (1): 1–15.
  14.
  15. Gindikin, S. G. (1975). "Invariant generalized functions in homogeneous domains". Functional Analysis and Its Applications 9 (1): 50–52.
