Maximum entropy probability distribution

In statistics and information theory, a maximum entropy probability distribution has entropy that is at least as great as that of all other members of a specified class of probability distributions. According to the principle of maximum entropy, if nothing is known about a distribution except that it belongs to a certain class (usually defined in terms of specified properties or measures), then the distribution with the largest entropy should be chosen as the least-informative default. The motivation is twofold: first, maximizing entropy minimizes the amount of prior information built into the distribution; second, many physical systems tend to move towards maximal entropy configurations over time.

Definition of entropy and differential entropy

If X is a discrete random variable with distribution given by

\operatorname{Pr}(X=x_k) = p_k \quad\mbox{ for } k=1,2,\ldots

then the entropy of X is defined as

H(X) = - \sum_{k\ge 1}p_k\log p_k \;.

If X is a continuous random variable with probability density p(x), then the differential entropy of X is defined as[1][2][3]

H(X) = - \int_{-\infty}^\infty p(x)\log p(x) dx\;.

p(x) log p(x) is understood to be zero whenever p(x) = 0.

This is a special case of more general forms described in the articles Entropy (information theory), Principle of maximum entropy, and Differential entropy. In connection with maximum entropy distributions, this is the only one needed, because maximizing H(X) will also maximize the more general forms.

The base of the logarithm is not important as long as the same one is used consistently: change of base merely results in a rescaling of the entropy. Information theorists may prefer to use base 2 in order to express the entropy in bits; mathematicians and physicists will often prefer the natural logarithm, resulting in a unit of nats for the entropy.
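For example, here is a minimal Python sketch (NumPy assumed; the helper name entropy is illustrative, not part of the article) showing that changing the base of the logarithm only rescales the entropy:

```python
import numpy as np

def entropy(p, base=np.e):
    """Entropy of a discrete distribution p; terms with p_k = 0 contribute 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(base)

p = [0.5, 0.25, 0.25]
print(entropy(p, base=2))   # 1.5 (bits)
print(entropy(p))           # 1.0397... (nats) = 1.5 * ln 2
```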

Distributions with measured constants

Many statistical distributions of practical interest are those for which the moments or other measurable quantities are constrained to be constants. The following theorem, due to Ludwig Boltzmann, gives the form of the probability density under these constraints.

Continuous version

Suppose S is a closed subset of the real numbers R and we choose to specify n measurable functions f1,...,fn and n numbers a1,...,an. We consider the class C of all real-valued random variables which are supported on S (i.e. whose density function is zero outside of S) and which satisfy the n expected value conditions

\operatorname{E}(f_j(X)) = a_j\quad\mbox{ for } j=1,\ldots,n

If there is a member in C whose density function is positive everywhere in S, and if there exists a maximal entropy distribution for C, then its probability density p(x) has the following shape:

p(x)=c \exp\left(\sum_{j=1}^n \lambda_j f_j(x)\right)\quad \mbox{ for all } x\in S

where the constants c and λj have to be determined so that the integral of p(x) over S is 1 and the above conditions for the expected values are satisfied. Conversely, if constants c and λj like this can be found, then p(x) is indeed the density of the (unique) maximum entropy distribution for our class C.

Discrete version

Suppose S = {x1,x2,...} is a (finite or infinite) discrete subset of the reals and we choose to specify n functions f1,...,fn and n numbers a1,...,an. We consider the class C of all discrete random variables X which are supported on S and which satisfy the n conditions

\operatorname{E}(f_j(X)) = a_j\quad\mbox{ for } j=1,\ldots,n

If there exists a member of C which assigns positive probability to all members of S and if there exists a maximum entropy distribution for C, then this distribution has the following shape:

\operatorname{Pr}(X=x_k)=c \exp\left(\sum_{j=1}^n \lambda_j f_j(x_k)\right)\quad \mbox{ for } k=1,2,\ldots

where the constants c and λj have to be determined so that the sum of the probabilities is 1 and the above conditions for the expected values are satisfied. Conversely, if constants c and λj like this can be found, then the above distribution is indeed the maximum entropy distribution for our class C.
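To illustrate the discrete theorem, here is a short Python sketch (NumPy and SciPy assumed, neither of which the article relies on; the helper max_entropy_pmf is illustrative). It finds the constants numerically for a finite support by minimizing the convex dual \log\sum_k \exp(\sum_j \lambda_j f_j(x_k)) - \sum_j \lambda_j a_j, whose minimizer yields exactly the exponential-family form above.

```python
import numpy as np
from scipy.optimize import minimize

def max_entropy_pmf(xs, fs, targets):
    """Numerically find p_k = c * exp(sum_j lambda_j f_j(x_k)) on a finite
    support xs subject to E[f_j(X)] = targets[j], via the convex dual."""
    F = np.array([[f(x) for f in fs] for x in xs])   # shape (K, n)
    a = np.asarray(targets, dtype=float)

    def dual(lam):
        # log partition function minus lam . a; its minimizer gives the multipliers
        return np.log(np.sum(np.exp(F @ lam))) - lam @ a

    lam = minimize(dual, x0=np.zeros(len(fs))).x
    w = np.exp(F @ lam)
    return w / w.sum(), lam

# Example: support {1,...,6} with the single constraint E[X] = 4.5
# (the same setting as the dice example further below).
p, lam = max_entropy_pmf(np.arange(1, 7), [lambda x: x], [4.5])
print(p)                      # probabilities of the form c * exp(lambda * x_k)
print(p @ np.arange(1, 7))    # ≈ 4.5
```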

Proof

This theorem is proved with the calculus of variations and Lagrange multipliers. The constraints can be written as

\int_{-\infty}^{\infty}f_j(x)p(x)dx=a_j

We consider the functional

J(p(x))=-\int_{-\infty}^{\infty} p(x)\ln{p(x)}dx+\lambda_0\left(\int_{-\infty}^{\infty} p(x)dx-1\right)+\sum_{j=1}^{n}\lambda_j\left(\int_{-\infty}^{\infty} f_j(x)p(x)dx-a_j\right)

where the \lambda_j are the Lagrange multipliers. The term with \lambda_0 enforces normalization (the total probability is 1), and the remaining n terms fix the expected values \operatorname{E}(f_j(X)) at the prescribed constants a_j. The entropy attains an extremum when the functional derivative is equal to zero:

\frac{\delta{J(p(x))}}{\delta{p(x)}}=-\ln{p(x)}-1+\lambda_0+\sum_{j=1}^{n}\lambda_j f_j(x)=0

Verifying that this extremum is in fact a maximum is left as an exercise for the reader. Therefore, the maximum entropy probability distribution in this case must be of the form

p(x)=e^{-1+\lambda_0}\cdot e^{\sum_{j=1}^{n}\lambda_j f_j(x)} = c\cdot \exp\left(\sum_{j=1}^{n}\lambda_j f_j(x)\right) \; .

The proof of the discrete version is essentially the same.

Caveats

Note that not all classes of distributions contain a maximum entropy distribution. A class may contain distributions of arbitrarily large entropy (e.g. the class of all continuous distributions on R with mean 0 but arbitrary standard deviation), or the entropies may be bounded above with no distribution attaining the bound (e.g. the class of all continuous distributions X on R with E(X) = 0 and E(X^2) = E(X^3) = 1; see Cover, Ch. 12). It is also possible that the expected value restrictions for the class C force the probability distribution to be zero on certain subsets of S. In that case the theorem above does not apply, but one can work around this by shrinking the set S.
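A quick numerical illustration of the first caveat (SciPy assumed, not used in the article): the normal distributions N(0, \sigma^2) all have mean 0, yet their differential entropy \tfrac{1}{2}\ln(2\pi e\sigma^2) grows without bound as \sigma increases, so this class has no maximum entropy member.

```python
from scipy.stats import norm

# Differential entropy of N(0, sigma^2) is 0.5 * ln(2*pi*e*sigma^2),
# which is unbounded in sigma, so "mean 0, arbitrary sigma" has no maximizer.
for sigma in [1, 10, 100, 1000]:
    print(sigma, norm(loc=0, scale=sigma).entropy())
```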

Examples of maximum entropy distributions

Every probability distribution is trivially a maximum entropy probability distribution under the constraint that the distribution have its own entropy. To see this, rewrite the density as p(x)=\exp{(\ln{p(x)})} and compare to the expression of the theorem above. By choosing \ln{p(x)} \rightarrow f(x) to be the measurable function and \int \exp{(f(x))} f(x) dx=-H to be the constant, p(x) is the maximum entropy probability distribution under the constraint \int p(x)f(x)dx=-H.

Nontrivial examples are distributions that are subject to multiple constraints that are different from the assignment of the entropy. These are often found by starting with the same procedure \ln{p(x)} \rightarrow f(x) and finding that f(x) can be separated into parts.

A table of examples of maximum entropy distributions is given in Lisman (1972)[4] and Park & Bera (2009).[5]

Uniform and piecewise uniform distributions

The uniform distribution on the interval [a,b] is the maximum entropy distribution among all continuous distributions which are supported in the interval [a, b], and thus the probability density is 0 outside of the interval. This uniform density can be related to Laplace's principle of indifference, sometimes called the principle of insufficient reason. More generally, if we are given a subdivision a = a0 < a1 < ... < ak = b of the interval [a,b] and probabilities p1,...,pk that add up to one, then we can consider the class of all continuous distributions such that

\operatorname{Pr}(a_{j-1}\le X < a_j) = p_j \quad \mbox{ for } j=1,\ldots,k

The density of the maximum entropy distribution for this class is constant on each of the intervals [aj-1,aj). The uniform distribution on the finite set {x1,...,xn} (which assigns a probability of 1/n to each of these values) is the maximum entropy distribution among all discrete distributions supported on this set.
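A small Python sketch (NumPy assumed; the helper piecewise_uniform_pdf is illustrative) of this piecewise-constant maximum entropy density: on each subinterval [a_{j-1}, a_j) the density equals the assigned probability divided by the length of the subinterval.

```python
import numpy as np

def piecewise_uniform_pdf(x, knots, probs):
    """Maximum entropy density for Pr(a_{j-1} <= X < a_j) = probs[j-1]:
    constant probs[j-1] / (a_j - a_{j-1}) on each subinterval, 0 outside."""
    knots = np.asarray(knots, dtype=float)
    heights = np.asarray(probs, dtype=float) / np.diff(knots)
    j = np.searchsorted(knots, x, side='right') - 1
    inside = (j >= 0) & (j < len(heights))
    return np.where(inside, heights[np.clip(j, 0, len(heights) - 1)], 0.0)

# Subdivision of [0, 3] with probabilities 0.2, 0.5, 0.3 on unit-length pieces
print(piecewise_uniform_pdf(np.array([0.5, 1.5, 2.5]), [0, 1, 2, 3], [0.2, 0.5, 0.3]))
# -> [0.2, 0.5, 0.3]
```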

Positive and specified mean: the exponential distribution

The exponential distribution, for which the density function is

 p(x|\lambda) = \begin{cases}
\lambda e^{-\lambda x} & x \ge 0, \\
0 & x < 0,
\end{cases}

is the maximum entropy distribution among all continuous distributions supported on [0,∞) that have a specified mean of 1/λ.
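As a numerical check (SciPy assumed), a few nonnegative distributions with the same mean can be compared; the exponential has the largest differential entropy of the three shown.

```python
import numpy as np
from scipy.stats import expon, gamma, rayleigh

mean = 2.0   # i.e. lambda = 1/2

# Three distributions on [0, inf), all with mean 2.0.
print("exponential:", expon(scale=mean).entropy())                        # 1 + ln(2) ≈ 1.693
print("gamma(k=2) :", gamma(a=2, scale=mean / 2).entropy())
print("rayleigh   :", rayleigh(scale=mean / np.sqrt(np.pi / 2)).entropy())
```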

Specified variance: the normal distribution

The normal distribution N(μ,σ2), for which the density function is


p(x| \mu, \sigma) = \frac{1}{\sigma \sqrt{2\pi} } e^{ -\frac{(x-\mu)^2}{2\sigma^2} },

has maximum entropy among all real-valued distributions with a specified variance σ2 (a particular moment). Therefore, the assumption of normality imposes the minimal prior structural constraint beyond this moment. (See the differential entropy article for a derivation.)
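Similarly (SciPy assumed), among a few real-valued distributions with the same variance, the normal attains the largest differential entropy, \tfrac{1}{2}\ln(2\pi e\sigma^2):

```python
import numpy as np
from scipy.stats import norm, laplace, uniform

var = 1.0

# Three real-valued distributions, each with variance 1.
print("normal :", norm(scale=np.sqrt(var)).entropy())        # 0.5 * ln(2*pi*e) ≈ 1.419
print("laplace:", laplace(scale=np.sqrt(var / 2)).entropy())  # variance of Laplace(b) is 2*b^2
print("uniform:", uniform(loc=-np.sqrt(3 * var), scale=2 * np.sqrt(3 * var)).entropy())
```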

Discrete distributions with specified mean

Among all the discrete distributions supported on the set {x1,...,xn} with a specified mean μ, the maximum entropy distribution has the following shape:

\operatorname{Pr}(X=x_k) = Cr^{x_k} \quad\mbox{ for } k=1,\ldots, n

where the positive constants C and r can be determined by the requirements that the sum of all the probabilities must be 1 and the expected value must be μ.

For example, suppose a large number N of dice are thrown, and you are told that the sum of all the shown numbers is S. Based on this information alone, what would be a reasonable assumption for the number of dice showing 1, 2, ..., 6? This is an instance of the situation considered above, with {x1,...,x6} = {1,...,6} and μ = S/N.
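A short Python sketch of this dice example (SciPy assumed; the helper dice_maxent is illustrative): solve for r so that the distribution C r^k on {1,...,6} has the observed mean μ = S/N, with C fixed by normalization.

```python
import numpy as np
from scipy.optimize import brentq

def dice_maxent(mu):
    """Maximum entropy distribution C * r**k on {1,...,6} with mean mu."""
    k = np.arange(1, 7)

    def mean_given_r(r):
        w = r ** k
        return (w @ k) / w.sum() - mu

    # The mean is monotone in r, ranging from 1 to 6, so a root exists for 1 < mu < 6.
    r = brentq(mean_given_r, 1e-9, 1e9)
    w = r ** k
    return w / w.sum()

p = dice_maxent(4.5)          # e.g. the average shown value is 4.5
print(p, p @ np.arange(1, 7)) # probabilities tilt toward 6; mean ≈ 4.5
```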

Finally, among all the discrete distributions supported on the infinite set {x1,x2,...} with mean μ, the maximum entropy distribution has the shape:

\operatorname{Pr}(X=x_k) = Cr^{x_k} \quad\mbox{ for } k=1,2,\ldots ,

where again the constants C and r are determined by the requirements that the sum of all the probabilities must be 1 and the expected value must be μ. For example, in the case that xk = k, this gives

C = \frac{1}{\mu - 1} , \quad\quad r = \frac{\mu - 1}{\mu} ,

so that the resulting maximum entropy distribution is a geometric distribution.
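A quick numerical check of these constants (NumPy assumed): with C = 1/(μ−1) and r = (μ−1)/μ, the probabilities C r^k on {1, 2, ...} sum to 1 and have mean μ.

```python
import numpy as np

mu = 3.0
C, r = 1 / (mu - 1), (mu - 1) / mu

k = np.arange(1, 2000)   # truncate the infinite support; the tail is negligible here
p = C * r ** k
print(p.sum())           # ≈ 1.0
print((k * p).sum())     # ≈ mu
```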

Circular random variables

For a continuous random variable \theta_i distributed on the unit circle, the von Mises distribution maximizes the entropy when the real and imaginary parts of the first circular moment are specified[6] or, equivalently, when the circular mean and circular variance are specified.

When the mean and variance of the angles \theta_i modulo 2\pi are specified, the wrapped normal distribution maximizes the entropy.[6]

Maximizer for specified mean, variance and skew

There exists an upper bound on the entropy of continuous random variables on \mathbb R with a specified mean, variance, and skewness. However, no distribution attains this upper bound, because p(x) = c\exp{(\lambda_1x+\lambda_2x^2+\lambda_3x^3)} is unbounded (and hence cannot be normalized) unless \lambda_3=0 (see Cover, Ch. 12). Thus, no maximum entropy distribution exists under these constraints.

The maximum entropy is, however, \epsilon-achievable: entropies arbitrarily close to the upper bound can be attained. Start with a normal distribution of the specified mean and variance. To introduce a positive skew, perturb this normal distribution upward by a small amount at a value many \sigma larger than the mean. The skewness, being proportional to the third moment, will be affected more than the lower-order moments.
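The following numerical sketch (NumPy assumed; the bump is modelled as a narrow Gaussian rather than a point mass so that the differential entropy stays finite) illustrates this construction: a tiny component far in the right tail changes the skewness substantially while barely moving the mean, the variance, or the entropy.

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 600001)
dx = x[1] - x[0]

def gauss(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

eps = 1e-3
base = gauss(x, 0.0, 1.0)                              # N(0, 1)
pert = (1 - eps) * base + eps * gauss(x, 10.0, 0.1)    # tiny narrow bump 10 sigma out

def summary(p):
    m = np.sum(x * p) * dx
    v = np.sum((x - m) ** 2 * p) * dx
    skew = np.sum((x - m) ** 3 * p) * dx / v ** 1.5
    q = p[p > 0]
    h = -np.sum(q * np.log(q)) * dx                    # differential entropy (nats)
    return m, v, skew, h

print(summary(base))   # ≈ (0, 1, 0, 1.419)
print(summary(pert))   # mean, variance and entropy barely move; skewness jumps to ≈ 0.84
```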

Other examples

In the table below, each listed distribution maximizes the entropy subject to the functional constraints listed in the third column, together with the constraint that x be included in the support of the probability density, which is listed in the fourth column.[4][5] Several of the listed examples (Bernoulli, geometric, exponential, Laplace, Pareto) are trivial in this sense, because their associated constraints are equivalent to the assignment of their entropy; they are included anyway because their constraints are related to common or easily measured quantities. For reference, \Gamma(x) = \int_0^{\infty} e^{-t} t^{x-1} dt is the gamma function, \psi(x) = \frac{d}{dx} \ln\Gamma(x)=\frac{\Gamma'(x)}{\Gamma(x)} is the digamma function, B(p,q) = \frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)} is the beta function, and \gamma_E is Euler's constant.

Table of probability distributions and corresponding maximum entropy constraints
Distribution name | Probability density / mass function | Maximum entropy constraint | Support
Uniform (discrete) | f(k) = \frac{1}{b-a+1} | None | \{a, a+1, \ldots, b-1, b\}
Uniform (continuous) | f(x) = \frac{1}{b-a} | None | [a,b]
Bernoulli | f(k) = p^k(1-p)^{1-k} | E(k)=p | \{0,1\}
Geometric | f(k) = (1-p)^{k-1}\,p | E(k)=\frac{1}{p} | \{1,2,3,\ldots\}
Exponential | f(x) = \lambda \exp\left(-\lambda x\right) | E(x)=\frac{1}{\lambda} | [0,\infty)
Laplace | f(x) = \frac{1}{2b} \exp\left(-\frac{|x - \mu|}{b}\right) | E(|x-\mu|)=b | (-\infty,\infty)
Asymmetric Laplace | f(x)=\frac{\lambda\, e^{-(x-m)\lambda s \kappa^s}}{\kappa+1/\kappa}, where s=\sgn(x-m) | E((x-m)s\kappa^s)=\frac{1}{\lambda} | (-\infty,\infty)
Pareto | f(x) = \frac{\alpha x_m^\alpha}{x^{\alpha+1}} | E(\ln(x))=\frac{1}{\alpha}+\ln(x_m) | [x_m,\infty)
Normal | f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) | E(x)=\mu,\ E((x-\mu)^2)=\sigma^2 | (-\infty,\infty)
von Mises | f(\theta) = \frac{1}{2\pi I_0(\kappa)} \exp\left(\kappa \cos(\theta-\mu)\right) | E(\cos\theta)=\frac{I_1(\kappa)}{I_0(\kappa)}\cos\mu,\ E(\sin\theta)=\frac{I_1(\kappa)}{I_0(\kappa)}\sin\mu | [0,2\pi)
Rayleigh | f(x) = \frac{x}{\sigma^2} \exp\left(-\frac{x^2}{2\sigma^2}\right) | E(x^2)=2\sigma^2,\ E(\ln(x))=\frac{\ln(2\sigma^2)-\gamma_E}{2} | [0,\infty)
Beta | f(x) = \frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)} | E(\ln(x))=\psi(\alpha)-\psi(\alpha+\beta),\ E(\ln(1-x))=\psi(\beta)-\psi(\alpha+\beta) | [0,1]
Cauchy | f(x) = \frac{1}{\pi(1+x^2)} | E(\ln(1+x^2))=2\ln 2 | (-\infty,\infty)
Chi | f(x) = \frac{2}{2^{k/2} \Gamma(k/2)} x^{k-1} \exp\left(-\frac{x^2}{2}\right) | E(x^2)=k,\ E(\ln(x))=\frac{1}{2}\left[\psi\left(\frac{k}{2}\right)+\ln(2)\right] | [0,\infty)
Chi-squared | f(x) = \frac{1}{2^{k/2} \Gamma(k/2)} x^{\frac{k}{2}-1} \exp\left(-\frac{x}{2}\right) | E(x)=k,\ E(\ln(x))=\psi\left(\frac{k}{2}\right)+\ln(2) | [0,\infty)
Erlang | f(x) = \frac{\lambda^k}{(k-1)!} x^{k-1} \exp(-\lambda x) | E(x)=\frac{k}{\lambda},\ E(\ln(x))=\psi(k)-\ln(\lambda) | [0,\infty)
Gamma | f(x) = \frac{x^{k-1} \exp\left(-\frac{x}{\theta}\right)}{\theta^k \Gamma(k)} | E(x)=k\theta,\ E(\ln(x))=\psi(k)+\ln(\theta) | [0,\infty)
Lognormal | f(x) = \frac{1}{\sigma x \sqrt{2\pi}} \exp\left(-\frac{(\ln x - \mu)^2}{2\sigma^2}\right) | E(\ln(x))=\mu,\ E((\ln(x)-\mu)^2)=\sigma^2 | (0,\infty)
Maxwell–Boltzmann | f(x) = \frac{1}{a^3}\sqrt{\frac{2}{\pi}}\, x^2 \exp\left(-\frac{x^2}{2a^2}\right) | E(x^2)=3a^2,\ E(\ln(x))=1+\ln\left(\frac{a}{\sqrt{2}}\right)-\frac{\gamma_E}{2} | [0,\infty)
Weibull | f(x) = \frac{k}{\lambda^k} x^{k-1} \exp\left(-\frac{x^k}{\lambda^k}\right) | E(x^k)=\lambda^k,\ E(\ln(x))=\ln(\lambda)-\frac{\gamma_E}{k} | [0,\infty)
Multivariate normal | f_X(\vec{x}) = \frac{\exp\left(-\frac{1}{2}(\vec{x}-\vec{\mu})^\top \Sigma^{-1}(\vec{x}-\vec{\mu})\right)}{(2\pi)^{N/2}\left|\Sigma\right|^{1/2}} | E(\vec{x})=\vec{\mu},\ E((\vec{x}-\vec{\mu})(\vec{x}-\vec{\mu})^\top)=\Sigma | \mathbb{R}^N
Binomial | f(k) = {n \choose k} p^k (1-p)^{n-k} | E(k)=\mu,\ f \in \text{n-generalized binomial distribution}[7] | \{0,1,\ldots,n\}
Poisson | f(k) = \frac{\lambda^k e^{-\lambda}}{k!} | E(k)=\mu,\ f \in \infty\text{-generalized binomial distribution}[7] | \{0,1,2,\ldots\}
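As a numerical sanity check of one row of the table (SciPy assumed), the gamma distribution with shape k and scale \theta satisfies E(x)=k\theta and E(\ln(x))=\psi(k)+\ln(\theta):

```python
import numpy as np
from scipy.stats import gamma
from scipy.special import digamma

k, theta = 2.5, 1.7

print(gamma.mean(a=k, scale=theta), k * theta)       # E(x) = k * theta
print(gamma.expect(np.log, args=(k,), scale=theta),
      digamma(k) + np.log(theta))                    # E(ln(x)) = psi(k) + ln(theta)
```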

Notes

  1. Williams, D. (2001). Weighing the Odds. Cambridge University Press. ISBN 0-521-00618-X (pages 197–199)
  2. Bernardo, J. M.; Smith, A. F. M. (2000). Bayesian Theory. Wiley. ISBN 0-471-49464-X (pages 209, 366)
  3. O'Hagan, A. (1994). Kendall's Advanced Theory of Statistics, Vol. 2B: Bayesian Inference. Edward Arnold. ISBN 0-340-52922-9 (Section 5.40)
  4. Lisman, J. H. C.; van Zuylen, M. C. A. (1972). "Note on the generation of most probable frequency distributions". Statistica Neerlandica.
  5. Park, Sung Y.; Bera, Anil K. (2009). "Maximum entropy autoregressive conditional heteroskedasticity model". Journal of Econometrics.
  6. Jammalamadaka, S. Rao; SenGupta, A. (2001). Topics in Circular Statistics. World Scientific.
  7. Harremoës, P. (2001). "Binomial and Poisson distributions as maximum entropy distributions". IEEE Transactions on Information Theory.


References

  Cover, T. M.; Thomas, J. A. (2006). Elements of Information Theory (2nd ed.). Wiley. (Chapter 12, "Maximum Entropy".)