Wishart distribution
Notation: X ~ W_p(V, n)

Parameters: n > p − 1 degrees of freedom (real); V > 0 scale matrix (p × p, positive definite)
Support: X is a p × p positive-definite matrix

Mean: E[X] = nV
Mode: (n − p − 1)V, for n ≥ p + 1
Variance: Var(X_ij) = n(v_ij² + v_ii v_jj)
Entropy: see below
CF: Θ ↦ |I − 2iΘV|^{−n/2}
In statistics, the Wishart distribution is a generalization to multiple dimensions of the chi-squared distribution, or, in the case of non-integer degrees of freedom, of the gamma distribution. It is named in honor of John Wishart, who first formulated the distribution in 1928.^{1}
It is any of a family of probability distributions defined over symmetric, nonnegative-definite matrix-valued random variables (“random matrices”). These distributions are of great importance in the estimation of covariance matrices in multivariate statistics. In Bayesian statistics, the Wishart distribution is the conjugate prior of the inverse covariance matrix of a multivariate normal random vector.
Contents
 1 Definition
 2 Occurrence
 3 Probability density function
 4 Use in Bayesian statistics
 5 Properties
 6 Theorem
 7 Estimator of the multivariate normal distribution
 8 Bartlett decomposition
 9 Marginal distribution of matrix elements
 10 The possible range of the shape parameter
 11 Relationships to other distributions
 12 See also
 13 References
 14 External links
Definition
Suppose X is an n × p matrix, each row of which is independently drawn from a p-variate normal distribution with zero mean:

    X_(i) = (x_i1, …, x_ip) ~ N_p(0, V).

Then the Wishart distribution is the probability distribution of the p × p random matrix

    S = X^T X = Σ_{i=1}^{n} X_(i) X_(i)^T,

known as the scatter matrix. One indicates that S has that probability distribution by writing

    S ~ W_p(V, n).

The positive integer n is the number of degrees of freedom. Sometimes this is written W(V, p, n). For n ≥ p the matrix S is invertible with probability 1 if V is invertible.

If p = 1 and V = 1, then this distribution is a chi-squared distribution with n degrees of freedom.
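The definition above can be sketched numerically: drawing n rows from N_p(0, V) and forming the scatter matrix gives a Wishart sample, whose expectation is nV. A minimal Monte Carlo sketch using numpy (the scale matrix, sample counts, and tolerances are illustrative choices, not part of the theory):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 10                      # dimension and degrees of freedom
V = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.5]])   # positive definite scale matrix

def wishart_sample(rng, V, n):
    """Draw n i.i.d. rows from N_p(0, V) and return the scatter matrix X^T X."""
    X = rng.multivariate_normal(np.zeros(len(V)), V, size=n)
    return X.T @ X

# Since S ~ W_p(V, n) has mean nV, averaging many scatter matrices
# should approach n * V.
mean_est = np.mean([wishart_sample(rng, V, n) for _ in range(20000)], axis=0)
print(np.round(mean_est, 1))
```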
Occurrence
The Wishart distribution arises as the distribution of the sample covariance matrix for a sample from a multivariate normal distribution.^{citation needed} It occurs frequently in likelihood-ratio tests in multivariate statistical analysis. It also arises in the spectral theory of random matrices^{citation needed} and in multidimensional Bayesian analysis.^{citation needed}
Probability density function
The Wishart distribution can be characterized by its probability density function as follows:

Let X be a p × p symmetric matrix of random variables that is positive definite, and let V be a (fixed) positive definite matrix of size p × p.

Then, if n ≥ p, X has a Wishart distribution with n degrees of freedom if it has a probability density function given by

    f(X) = (1 / (2^{np/2} |V|^{n/2} Γ_p(n/2))) |X|^{(n−p−1)/2} exp(−(1/2) tr(V^{−1} X)),

where Γ_p(·) is the multivariate gamma function, defined as

    Γ_p(n/2) = π^{p(p−1)/4} ∏_{j=1}^{p} Γ((n/2) − (j − 1)/2).

In fact, the above definition can be extended to any real n > p − 1. If n ≤ p − 1, then the Wishart no longer has a density; instead it represents a singular distribution that takes values in a lower-dimensional subspace of the space of p × p matrices.^{2}
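The density formula can be checked numerically against the implementation in scipy.stats.wishart. A short sketch (the matrices V and S are arbitrary test values):

```python
import numpy as np
from scipy.stats import wishart
from scipy.special import multigammaln

p, n = 2, 5
V = np.array([[1.0, 0.3],
              [0.3, 2.0]])        # scale matrix
S = np.array([[4.0, 1.0],
              [1.0, 6.0]])        # point at which to evaluate the density

def wishart_logpdf(S, n, V):
    """log f(S) = ((n-p-1)/2) ln|S| - tr(V^{-1} S)/2
                  - (np/2) ln 2 - (n/2) ln|V| - ln Gamma_p(n/2)."""
    p = len(V)
    _, logdet_S = np.linalg.slogdet(S)
    _, logdet_V = np.linalg.slogdet(V)
    return (0.5 * (n - p - 1) * logdet_S
            - 0.5 * np.trace(np.linalg.solve(V, S))
            - 0.5 * n * p * np.log(2)
            - 0.5 * n * logdet_V
            - multigammaln(0.5 * n, p))

print(wishart_logpdf(S, n, V))
print(wishart.logpdf(S, df=n, scale=V))
```

The two printed log-densities agree; `multigammaln` supplies ln Γ_p directly.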
Use in Bayesian statistics
In Bayesian statistics, in the context of the multivariate normal distribution, the Wishart distribution is the conjugate prior to the precision matrix Ω = Σ^{−1}, where Σ is the covariance matrix.
Choice of W
The least informative, proper Wishart prior is obtained by setting n = p.

The prior mean of W_p(V, n) is nV. This implies that a reasonable choice for V is n^{−1} Σ_0^{−1}, where Σ_0 is some prior guess for the covariance matrix.
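The conjugacy can be illustrated with the standard update rule for a zero-mean multivariate normal: with prior Λ ~ W_p(V_0, n_0) and observations x_1, …, x_N, the posterior is W_p(V_N, n_0 + N) with V_N = (V_0^{−1} + Σ_i x_i x_i^T)^{−1}. A sketch under those assumptions (the data and variable names here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
p, N = 2, 2000
Sigma_true = np.array([[1.0, 0.4],
                       [0.4, 0.8]])
X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=N)

n0 = p                               # least informative proper prior: n = p
V0 = np.eye(p)
# Conjugate update: V_N = (V0^{-1} + sum_i x_i x_i^T)^{-1}, n_N = n0 + N
V_N = np.linalg.inv(np.linalg.inv(V0) + X.T @ X)
n_N = n0 + N

# The posterior mean of the precision is n_N * V_N; its inverse should be
# close to the true covariance once N is large.
Sigma_post = np.linalg.inv(n_N * V_N)
print(np.round(Sigma_post, 2))
```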
Properties
Log-expectation

Note the following formula:^{3}

    E[ln |X|] = Σ_{i=1}^{p} ψ((n + 1 − i)/2) + p ln 2 + ln |V|,

where ψ is the digamma function (the derivative of the log of the gamma function).
This plays a role in variational Bayes derivations for Bayes networks involving the Wishart distribution.
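The log-expectation identity can be verified by Monte Carlo (the scale matrix, sample size, and tolerance below are illustrative):

```python
import numpy as np
from scipy.special import digamma
from scipy.stats import wishart

p, n = 3, 7
V = np.diag([1.0, 2.0, 0.5])

# Analytic value: sum_i psi((n+1-i)/2) + p ln 2 + ln|V|
analytic = (sum(digamma(0.5 * (n + 1 - i)) for i in range(1, p + 1))
            + p * np.log(2)
            + np.linalg.slogdet(V)[1])

rng = np.random.default_rng(2)
samples = wishart.rvs(df=n, scale=V, size=50000, random_state=rng)
mc = np.mean([np.linalg.slogdet(S)[1] for S in samples])
print(analytic, mc)
```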
Entropy
The information entropy of the distribution has the following formula:^{3}

    H[X] = −ln B(V, n) − ((n − p − 1)/2) E[ln |X|] + np/2,

where B(V, n) is the normalizing constant of the distribution:

    B(V, n) = 1 / (2^{np/2} |V|^{n/2} Γ_p(n/2)).

This can be expanded as follows:

    H[X] = ((p + 1)/2) ln |V| + (p(p + 1)/2) ln 2 + ln Γ_p(n/2) − ((n − p − 1)/2) Σ_{i=1}^{p} ψ((n + 1 − i)/2) + np/2.
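The expanded entropy formula can be checked against −E[ln f(X)] estimated by Monte Carlo (parameters, sample size, and tolerance are illustrative):

```python
import numpy as np
from scipy.special import digamma, multigammaln
from scipy.stats import wishart

p, n = 2, 6
V = np.array([[1.5, 0.2],
              [0.2, 0.9]])

# Expanded formula: (p+1)/2 ln|V| + p(p+1)/2 ln 2 + ln Gamma_p(n/2)
#                   - (n-p-1)/2 * sum_i psi((n+1-i)/2) + np/2
psi_p = sum(digamma(0.5 * (n + 1 - i)) for i in range(1, p + 1))
H = (0.5 * (p + 1) * np.linalg.slogdet(V)[1]
     + 0.5 * p * (p + 1) * np.log(2)
     + multigammaln(0.5 * n, p)
     - 0.5 * (n - p - 1) * psi_p
     + 0.5 * n * p)

rng = np.random.default_rng(3)
samples = wishart.rvs(df=n, scale=V, size=20000, random_state=rng)
H_mc = -np.mean([wishart.logpdf(S, df=n, scale=V) for S in samples])
print(H, H_mc)
```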
Characteristic function
The characteristic function of the Wishart distribution is

    Θ ↦ |I − 2iΘV|^{−n/2}.

In other words,

    Θ ↦ E[exp(i tr(XΘ))] = |I − 2iΘV|^{−n/2},

where E[⋅] denotes expectation. (Here Θ and I are matrices of the same size as V, I being the identity matrix, and i is the square root of −1.)^{4}
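The characteristic function can be checked by Monte Carlo (Θ, V, and the sample size are arbitrary illustrative choices):

```python
import numpy as np
from scipy.stats import wishart

p, n = 2, 4
V = np.array([[1.0, 0.2],
              [0.2, 0.5]])
Theta = np.array([[0.3, 0.1],
                  [0.1, 0.2]])

# Analytic value: |I - 2i Theta V|^{-n/2}
analytic = np.linalg.det(np.eye(p) - 2j * Theta @ V) ** (-n / 2)

# Monte Carlo estimate of E[exp(i tr(X Theta))]
rng = np.random.default_rng(4)
samples = wishart.rvs(df=n, scale=V, size=100000, random_state=rng)
traces = np.einsum('kij,ji->k', samples, Theta)   # tr(X_k Theta) for each sample
mc = np.mean(np.exp(1j * traces))
print(analytic, mc)
```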
Theorem
If a p × p random matrix X has a Wishart distribution with m degrees of freedom and variance matrix V (write X ~ W_p(V, m)) and C is a q × p matrix of rank q, then^{5}

    C X C^T ~ W_q(C V C^T, m).
Corollary 1
If c is a nonzero p × 1 constant vector, then^{5}

    c^T X c ~ σ_c² χ²_m.

In this case, χ²_m is the chi-squared distribution and σ_c² = c^T V c (note that σ_c² is a constant; it is positive because V is positive definite).
Corollary 2
Consider the case where c = (0, …, 0, 1, 0, …, 0)^T (that is, the jth element is one and all others zero). Then corollary 1 above shows that

    w_jj ~ v_jj χ²_m

gives the marginal distribution of each of the elements on the matrix's diagonal.
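Corollary 2 says that each scaled diagonal element w_jj / v_jj is chi-squared with m degrees of freedom, so its first two moments are m and 2m. A quick empirical check (the scale matrix and sample size are illustrative):

```python
import numpy as np
from scipy.stats import wishart

p, n = 3, 5
V = np.array([[2.3, 0.3, 0.3],
              [0.3, 1.3, 0.3],
              [0.3, 0.3, 0.8]])

rng = np.random.default_rng(5)
samples = wishart.rvs(df=n, scale=V, size=20000, random_state=rng)

# w_11 / v_11 should behave like a chi-squared with n degrees of freedom:
# mean n, variance 2n.
scaled = samples[:, 0, 0] / V[0, 0]
print(scaled.mean(), scaled.var())
```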
Noted statistician George Seber points out^{citation needed} that the Wishart distribution is not called the “multivariate chi-squared distribution” because the marginal distribution of the off-diagonal elements is not chi-squared. Seber prefers^{citation needed} to reserve the term multivariate for the case when all univariate marginals belong to the same family.
Estimator of the multivariate normal distribution
The Wishart distribution is the sampling distribution of the maximumlikelihood estimator (MLE) of the covariance matrix of a multivariate normal distribution.^{6} A derivation of the MLE uses the spectral theorem.
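For a zero-mean normal with known mean, the MLE of the covariance is Σ̂ = (1/n) X^T X, so n Σ̂ is exactly a scatter matrix and is therefore W_p(Σ, n)-distributed. A minimal sketch of this sampling-distribution statement (sizes and tolerances are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
p, n = 2, 8
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])

def mle_cov(rng):
    """Known-mean MLE of the covariance: (1/n) X^T X."""
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    return X.T @ X / n

# E[Sigma_hat] = Sigma for the known-mean MLE, since E[X^T X] = n Sigma.
mean_est = np.mean([mle_cov(rng) for _ in range(30000)], axis=0)
print(np.round(mean_est, 2))
```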
Bartlett decomposition
The Bartlett decomposition of a matrix X from a p-variate Wishart distribution with scale matrix V and n degrees of freedom is the factorization

    X = L A A^T L^T,

where L is the Cholesky factor of V, and A is the lower-triangular matrix

    A = [ c_1    0     …   0
          n_21   c_2   …   0
          ⋮      ⋮     ⋱   ⋮
          n_p1   n_p2  …   c_p ],

where c_i² ~ χ²_{n−i+1} and n_ij ~ N(0, 1) independently.^{7} This provides a useful method for obtaining random samples from a Wishart distribution.^{8}
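The Bartlett recipe above translates directly into a sampler; here is a sketch (a plain transcription of the factorization, with no claim of numerical optimality):

```python
import numpy as np

def wishart_bartlett(rng, V, n):
    """Sample X ~ W_p(V, n) via the Bartlett decomposition X = L A A^T L^T."""
    p = len(V)
    L = np.linalg.cholesky(V)               # V = L L^T
    A = np.zeros((p, p))
    for i in range(p):                       # 0-based i: c_{i+1}^2 ~ chi2_{n-i}
        A[i, i] = np.sqrt(rng.chisquare(n - i))
        A[i, :i] = rng.standard_normal(i)    # independent N(0,1) below diagonal
    LA = L @ A
    return LA @ LA.T

rng = np.random.default_rng(7)
V = np.array([[1.0, 0.4],
              [0.4, 2.0]])
n = 6

# Sanity check against the Wishart mean nV.
mean_est = np.mean([wishart_bartlett(rng, V, n) for _ in range(30000)], axis=0)
print(np.round(mean_est, 2))
```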
Marginal distribution of matrix elements
Let V be a 2 × 2 variance matrix characterized by correlation coefficient −1 < ρ < 1 and L its lower Cholesky factor:

    V = [ σ1²      ρσ1σ2
          ρσ1σ2    σ2²   ],      L = [ σ1        0
                                       ρσ2   √(1 − ρ²) σ2 ].

Multiplying through the Bartlett decomposition above, we find that a random sample from the 2 × 2 Wishart distribution is

    X = [ σ1² c1²                              σ1σ2 c1 (ρ c1 + √(1 − ρ²) n21)
          σ1σ2 c1 (ρ c1 + √(1 − ρ²) n21)       σ2² ((ρ c1 + √(1 − ρ²) n21)² + (1 − ρ²) c2²) ].

The diagonal elements, most evidently in the first element, follow the χ² distribution with n degrees of freedom (scaled by σ²) as expected. The off-diagonal element is less familiar but can be identified as a normal variance-mean mixture where the mixing density is a χ² distribution. The corresponding marginal probability density for the off-diagonal element is therefore a variance-gamma distribution, whose density involves the modified Bessel function of the second kind K_{(n−1)/2}.^{9} Similar results may be found for higher dimensions, but the interdependence of the off-diagonal correlations becomes increasingly complicated. It is also possible to write down the moment-generating function even in the noncentral case (essentially the nth power of Craig (1936),^{10} equation 10), although the probability density becomes an infinite sum of Bessel functions.
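While the full off-diagonal density is a variance-gamma, its first two moments follow from the general Wishart moment identities E[x_12] = n v_12 and Var(x_12) = n (v_12² + v_11 v_22), which can be checked empirically (the numbers here are illustrative):

```python
import numpy as np
from scipy.stats import wishart

n, rho = 5, 0.6
V = np.array([[1.0, rho],
              [rho, 1.0]])

rng = np.random.default_rng(8)
samples = wishart.rvs(df=n, scale=V, size=50000, random_state=rng)

# Off-diagonal element: mean n * v_12, variance n * (v_12^2 + v_11 * v_22).
x12 = samples[:, 0, 1]
print(x12.mean(), x12.var())
```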
The possible range of the shape parameter
It can be shown^{11} that the Wishart distribution can be defined if and only if the shape parameter n belongs to the set

    Λ_p := {0, 1, …, p − 1} ∪ (p − 1, ∞).

This set is named after Gindikin, who introduced it^{12} in the 1970s in the context of gamma distributions on homogeneous cones. However, for the new parameters in the discrete spectrum of the Gindikin ensemble, namely

    {0, 1, …, p − 1},

the corresponding Wishart distribution has no Lebesgue density.
Relationships to other distributions
 The Wishart distribution is related to the inverse-Wishart distribution, denoted by W_p^{−1}, as follows: If X ~ W_p(V, n) and if we do the change of variables C = X^{−1}, then C ~ W_p^{−1}(V^{−1}, n). This relationship may be derived by noting that the absolute value of the Jacobian determinant of this change of variables is |C|^{−(p+1)}; see, for example, equation (15.15) in Dwyer.^{13}
 In Bayesian statistics, the Wishart distribution is a conjugate prior for the precision parameter of the multivariate normal distribution, when the mean parameter is known.^{14}
 A generalization is the multivariate gamma distribution.
 A different type of generalization is the normalWishart distribution, essentially the product of a multivariate normal distribution with a Wishart distribution.
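The Wishart / inverse-Wishart relationship in the first bullet can be verified numerically with scipy: the log-densities of X under W_p(V, n) and of X^{−1} under W_p^{−1}(V^{−1}, n) differ exactly by the log-Jacobian −(p + 1) ln |X| (the matrices below are arbitrary test values):

```python
import numpy as np
from scipy.stats import wishart, invwishart

p, n = 2, 6
V = np.array([[1.0, 0.3],
              [0.3, 2.0]])
X = np.array([[3.0, 0.5],
              [0.5, 4.0]])

lw = wishart.logpdf(X, df=n, scale=V)
liw = invwishart.logpdf(np.linalg.inv(X), df=n, scale=np.linalg.inv(V))

# Densities relate by the Jacobian of S -> S^{-1} for symmetric matrices:
# f_W(X) = f_IW(X^{-1}) * |X|^{-(p+1)}.
log_jac = -(p + 1) * np.linalg.slogdet(X)[1]
print(lw, liw + log_jac)
```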
See also
References
 ^ Wishart, J. (1928). "The generalised product moment distribution in samples from a normal multivariate population". Biometrika 20A (1–2): 32–52. doi:10.1093/biomet/20A.1-2.32. JFM 54.0565.02. JSTOR 2331939.
 ^ Uhlig, H. (1994). "On Singular Wishart and Singular Multivariate Beta Distributions". The Annals of Statistics 22: 395. doi:10.1214/aos/1176325375.
 ^ ^{a} ^{b} C.M. Bishop, Pattern Recognition and Machine Learning, Springer 2006, p. 693.
 ^ Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis (3rd ed.). Hoboken, N. J.: Wiley Interscience. p. 259. ISBN 0471360910.
 ^ ^{a} ^{b} Rao, C. R., Linear statistical inference and its applications, Wiley 1965, p. 535.
 ^ Chatfield, C.; Collins, A. J. (1980). Introduction to Multivariate Analysis. pp. 103–108.
 ^ Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis (3rd ed.). Hoboken, N. J.: Wiley Interscience. p. 257. ISBN 0471360910.
 ^ Smith, W. B.; Hocking, R. R. (1972). "Algorithm AS 53: Wishart Variate Generator". Journal of the Royal Statistical Society, Series C 21 (3): 341–345. JSTOR 2346290.
 ^ Pearson, Karl; Jeffery, G. B.; Elderton, Ethel M. (December 1929). "On the Distribution of the First Product Moment-Coefficient, in Samples Drawn from an Indefinitely Large Normal Population". Biometrika (Biometrika Trust) 21: 164–201. doi:10.2307/2332556. JSTOR 2332556.
 ^ Craig, Cecil C. (1936). "On the Frequency Function of xy". Ann. Math. Statist. 7: 1–15. doi:10.1214/aoms/1177732541.
 ^ Peddada, Shyamal Das; Richards, Donald St. P. (1991). "Proof of a Conjecture of M. L. Eaton on the Characteristic Function of the Wishart Distribution". Annals of Probability 19 (2): 868–874. doi:10.1214/aop/1176990455.
 ^ Gindikin, S. G. (1975). "Invariant generalized functions in homogeneous domains". Funct. Anal. Appl. 9 (1): 50–52. doi:10.1007/BF01078179.
 ^ Dwyer, Paul S. (1967). "Some Applications of Matrix Derivatives in Multivariate Analysis". Journal of the American Statistical Association 62: 607–625. Available on JSTOR.
 ^ C.M. Bishop, Pattern Recognition and Machine Learning, Springer 2006.
External links