Markov's inequality

Figure: Markov's inequality gives an upper bound for the measure of the set (indicated in red) where f(x) exceeds a given level \varepsilon. The bound combines the level \varepsilon with the average value of f.

In probability theory, Markov's inequality gives an upper bound for the probability that a non-negative function of a random variable is greater than or equal to some positive constant. It is named after the Russian mathematician Andrey Markov, although it appeared earlier in the work of Pafnuty Chebyshev (Markov's teacher), and many sources, especially in analysis, refer to it as Chebyshev's inequality (sometimes calling it the first Chebyshev inequality and referring to Chebyshev's inequality as the second Chebyshev inequality) or Bienaymé's inequality.

Markov's inequality (like other similar inequalities) relates probabilities to expectations, and provides (frequently loose but still useful) bounds for the cumulative distribution function of a random variable.
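
For instance, if X is non-negative with cumulative distribution function F_X, then for any a > 0 the tail of the distribution is bounded by

1 - F_X(a) = \mathbb{P}(X > a) \leq \frac{\mathbb{E}(X)}{a}.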

For example, Markov's inequality implies that (assuming incomes are non-negative) no more than 1/5 of the population can have more than 5 times the average income.
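
To see this, let X be the income of a randomly chosen individual, so that \mathbb{E}(X) is the average income. Applying the inequality with a = 5\,\mathbb{E}(X) gives

\mathbb{P}(X \geq 5\,\mathbb{E}(X)) \leq \frac{\mathbb{E}(X)}{5\,\mathbb{E}(X)} = \frac{1}{5}.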

Statement

If X is any nonnegative integrable random variable and a > 0, then

\mathbb{P}(X \geq a) \leq \frac{\mathbb{E}(X)}{a}.

In the language of measure theory, Markov's inequality states that if (X, Σ, μ) is a measure space, f is a measurable extended real-valued function, and ε > 0, then

 \mu(\{x\in X:|f(x)|\geq \varepsilon \}) \leq {1\over \varepsilon}\int_X |f|\,d\mu.

(This measure-theoretic formulation is sometimes also referred to as Chebyshev's inequality.[1])
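
As a quick numerical illustration (a minimal sketch, not part of the original article), the following Python snippet compares the empirical tail probability of a random variable with the Markov bound \mathbb{E}(X)/a; the standard exponential distribution, the seed, the sample size, and the thresholds are all illustrative assumptions:

import numpy as np

# Minimal sketch: compare the empirical tail probability P(X >= a) of a
# standard exponential random variable (so E(X) = 1) with the Markov
# bound E(X)/a. Distribution, seed, and sample size are illustrative.
rng = np.random.default_rng(0)
samples = rng.exponential(scale=1.0, size=1_000_000)

for a in [1.0, 2.0, 5.0, 10.0]:
    tail = np.mean(samples >= a)    # empirical P(X >= a)
    bound = samples.mean() / a      # Markov bound E(X)/a
    print(f"a = {a:4.1f}   P(X >= a) = {tail:.5f}   E(X)/a = {bound:.5f}")

For every threshold the empirical tail stays below the bound, and the bound is visibly loose, as noted above.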

Extended version for monotonically increasing functions

If φ is a monotonically increasing function from the nonnegative reals to the nonnegative reals, X is a random variable, a ≥ 0, and φ(a) > 0, then

\mathbb{P}(|X| \geq a) \leq \frac{\mathbb{E}(\varphi(|X|))}{\varphi(a)}.
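
For example, taking \varphi(t) = t^p with p \geq 1 and a > 0 yields the moment bound

\mathbb{P}(|X| \geq a) \leq \frac{\mathbb{E}(|X|^p)}{a^p},

of which Chebyshev's inequality below is essentially the case p = 2 applied to X - \mathbb{E}(X).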

Proofs

We separate the case in which the measure space is a probability space from the more general case because the probability case is more accessible for the general reader.

In the language of probability theory

For any event E, let IE be the indicator random variable of E, that is, IE = 1 if E occurs and IE = 0 otherwise.

Using this notation, we have I(X ≥ a) = 1 if the event X ≥ a occurs, and I(X ≥ a) = 0 if X < a. Then, given a > 0,

aI_{(X \geq a)} \leq X\,

which is clear if we consider the two possible values of I(X ≥ a). If X < a, then I(X ≥ a) = 0, and so aI(X ≥ a) = 0 ≤ X. Otherwise, we have X ≥ a, for which I(X ≥ a) = 1 and so aI(X ≥ a) = a ≤ X.

Since \mathbb{E} is monotone (that is, X \leq Y implies \mathbb{E}(X) \leq \mathbb{E}(Y)), taking the expectation of both sides of an inequality preserves it. Therefore

\mathbb{E}(aI_{(X \geq a)}) \leq \mathbb{E}(X).\,

Now, using the linearity of expectation, the left side of this inequality is the same as

a\mathbb{E}(I_{(X \geq a)}) = a(1\cdot\mathbb{P}(X \geq a) + 0\cdot\mathbb{P}(X < a)) = a\mathbb{P}(X \geq a).\,

Thus we have

a\mathbb{P}(X \geq a) \leq \mathbb{E}(X)\,

and since a > 0, we can divide both sides by a to obtain Markov's inequality.

In the language of measure theory

We may assume that the function f is non-negative, since only its absolute value enters into the inequality. Now, consider the real-valued function s on X given by


s(x) =
\begin{cases}
  \varepsilon, & \text{if } f(x) \geq \varepsilon  \\
  0, & \text{if } f(x) < \varepsilon
\end{cases}

Then 0\leq s(x)\leq f(x). Since s is a simple function, the monotonicity of the Lebesgue integral and the definition of the integral of a simple function give


\int_X f(x) \, d\mu \geq \int_X s(x) \, d \mu = \varepsilon \mu( \{ x\in X : \, f(x) \geq \varepsilon \} )

and since \varepsilon >0 , both sides can be divided by \varepsilon, obtaining

\mu(\{x\in X : \, f(x) \geq \varepsilon \}) \leq {1\over \varepsilon }\int_X f \,d\mu.

Q.E.D.

Corollaries

Chebyshev's inequality

Chebyshev's inequality uses the variance to bound the probability that a random variable deviates far from the mean. Specifically:

\mathbb{P}(|X-\mathbb{E}(X)| \geq a) \leq \frac{\mathrm{Var}(X)}{a^2},

for any a>0. Here Var(X) is the variance of X, defined as:

 \operatorname{Var}(X) = \mathbb{E}[(X - \mathbb{E}(X) )^2].

Chebyshev's inequality follows from Markov's inequality by considering the random variable

 (X - \mathbb{E}(X))^2

and the constant

a^2

for which Markov's inequality reads

\mathbb{P}((X - \mathbb{E}(X))^2 \geq a^2) \leq \frac{\operatorname{Var}(X)}{a^2}.

This argument can be summarized (where "MI" indicates use of Markov's inequality):

\mathbb{P}(|X-\mathbb{E}(X)| \geq a) = \mathbb{P}\left((X-\mathbb{E}(X))^2 \geq a^2\right) \overset{\mathrm{MI}}{\leq} \frac{\mathbb{E}\left((X-\mathbb{E}(X))^2\right)}{a^2} = \frac{\operatorname{Var}(X)}{a^2}.
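
In particular, taking a = k\sigma, where \sigma is the standard deviation of X and k > 0, gives the familiar form

\mathbb{P}(|X - \mathbb{E}(X)| \geq k\sigma) \leq \frac{1}{k^2},

so that, for example, a random variable differs from its mean by at least two standard deviations with probability at most 1/4.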

Other corollaries

  1. The "monotonic" result can be demonstrated by noting that, since \varphi is monotonically increasing, the event |X| \geq a is contained in the event \varphi(|X|) \geq \varphi(a), so that:
    \mathbb{P}(|X| \ge a) \le \mathbb{P}(\varphi(|X|) \ge \varphi(a)) \overset{\mathrm{MI}}{\leq} \frac{\mathbb{E}(\varphi(|X|))}{\varphi(a)}.
  2. The result that, for a nonnegative random variable X, the quantile function of X satisfies:
    Q_X(1-p) \leq \frac{\mathbb{E}(X)}{p},
    which follows by applying Markov's inequality at the point Q_X(1-p):
    p \leq \mathbb{P}(X \geq Q_X(1-p)) \overset{\mathrm{MI}}{\leq} \frac{\mathbb{E}(X)}{Q_X(1-p)}.
    (A numerical sketch of this bound is given after the list.)
  3. Let M \succeq 0 be a self-adjoint matrix-valued random variable and a > 0. Then
    \mathbb{P}(M \npreceq a \cdot I) \leq \frac{\operatorname{tr}(\mathbb{E}(M))}{a}
    can be shown in a similar manner: since M \succeq 0, the event M \npreceq a \cdot I implies \operatorname{tr}(M) > a, so the scalar inequality applied to the nonnegative random variable \operatorname{tr}(M) gives the bound.
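
As with the basic inequality, the quantile bound in the second corollary can be checked numerically. The following Python sketch compares the empirical quantile Q_X(1-p) with the bound \mathbb{E}(X)/p; the exponential distribution, seed, sample size, and values of p are illustrative assumptions:

import numpy as np

# Minimal sketch: check the quantile bound Q_X(1-p) <= E(X)/p for a
# standard exponential random variable (E(X) = 1). All choices here
# (distribution, seed, sample size, values of p) are illustrative.
rng = np.random.default_rng(1)
samples = rng.exponential(scale=1.0, size=1_000_000)

for p in [0.5, 0.1, 0.01]:
    q = np.quantile(samples, 1 - p)    # empirical Q_X(1-p)
    bound = samples.mean() / p         # Markov bound E(X)/p
    print(f"p = {p:5.2f}   Q_X(1-p) = {q:7.4f}   E(X)/p = {bound:8.4f}")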

References

  1. E. M. Stein and R. Shakarchi, Real Analysis: Measure Theory, Integration, and Hilbert Spaces (Princeton Lectures in Analysis, vol. 3), Princeton University Press, 2005, p. 91.
