Efficient estimator


In statistics, an efficient estimator is an estimator that estimates the quantity of interest in some “best possible” manner. The notion of “best possible” relies upon the choice of a particular loss function, the function which quantifies the relative degree of undesirability of estimation errors of different magnitudes. The most common choice of the loss function is quadratic, resulting in the mean squared error criterion of optimality.[1]

Finite-sample efficiency

Suppose { Pθ | θ ∈ Θ } is a parametric model and X = (X1, …, Xn) are the data sampled from this model. Let T = T(X) be an estimator for the parameter θ. If this estimator is unbiased (that is, E[ T ] = θ), then the Cramér–Rao inequality states that the variance of this estimator is bounded from below:


    \operatorname{Var}[\,T\,]\ \geq\ \mathcal{I}_\theta^{-1},

where \mathcal{I}_\theta is the Fisher information matrix of the model at point θ. Generally, the variance measures the degree of dispersion of a random variable around its mean. Thus estimators with small variances are more concentrated: they estimate the parameters more precisely. We say that an estimator is a finite-sample efficient estimator (in the class of unbiased estimators) if it reaches the lower bound in the Cramér–Rao inequality above, for all θ ∈ Θ. Efficient estimators are always minimum variance unbiased estimators. However, the converse is false: there exist point-estimation problems for which the minimum-variance mean-unbiased estimator is inefficient.[2]
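
As a quick numerical check of the bound (a minimal Python sketch, not part of the original article; the Poisson model and all constants are illustrative), one can simulate an unbiased estimator and compare its variance with the reciprocal of the Fisher information. For a Poisson(λ) sample of size n the Fisher information is n/λ, and the sample mean attains the bound:

    import numpy as np

    rng = np.random.default_rng(0)
    lam, n, reps = 4.0, 50, 200_000

    # Simulate many samples and apply the unbiased estimator T = sample mean.
    samples = rng.poisson(lam, size=(reps, n))
    estimates = samples.mean(axis=1)

    crb = lam / n  # 1 / I_theta, since I_theta = n / lam for a Poisson sample
    print(f"Var[T] ~ {estimates.var():.5f},  CRB = {crb:.5f}")

The two printed numbers agree (both about 0.08 here), so the sample mean attains the Cramér–Rao bound in this model.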

Historically, finite-sample efficiency was an early optimality criterion. However, this criterion has some limitations:

  • Finite-sample efficient estimators are extremely rare. In fact, it was proved that efficient estimation is possible only in an exponential family, and only for the natural parameters of that family.[citation needed]
  • This notion of efficiency is restricted to the class of unbiased estimators. Since there are no good theoretical reasons to require that estimators are unbiased, this restriction is inconvenient. In fact, if we use mean squared error as a selection criterion, many biased estimators will slightly outperform the “best” unbiased ones. For example, in multivariate statistics for dimension three or more, the mean-unbiased estimator, the sample mean, is inadmissible: whatever the true parameter value, its expected squared error is no smaller, and is sometimes larger, than that of, for example, the James–Stein estimator (see the simulation sketch after this list).[citation needed]
  • Finite-sample efficiency uses the variance as the criterion by which estimators are judged. A more general approach is to use loss functions other than quadratic ones, in which case finite-sample efficiency can no longer be formulated.[citation needed][dubious]
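
To illustrate the second point above, the following sketch (assumptions ours: a single observation X ~ N(θ, I_p) with p = 5, so the “sample mean” is X itself) compares the total squared error of X with that of the James–Stein shrinkage estimator:

    import numpy as np

    rng = np.random.default_rng(1)
    p, reps = 5, 200_000
    theta = np.full(p, 2.0)  # an arbitrary true mean vector

    x = rng.normal(theta, 1.0, size=(reps, p))
    # James-Stein: shrink X toward the origin by a data-dependent factor.
    shrink = 1.0 - (p - 2) / (x ** 2).sum(axis=1)
    js = shrink[:, None] * x

    risk_x = ((x - theta) ** 2).sum(axis=1).mean()    # equals p in theory
    risk_js = ((js - theta) ** 2).sum(axis=1).mean()  # strictly smaller
    print(f"risk of X: {risk_x:.3f},  risk of James-Stein: {risk_js:.3f}")

The James–Stein risk comes out strictly smaller for this (and any other) choice of θ, which is what inadmissibility of the unbiased estimator means.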

Example

Among the models encountered in practice, efficient estimators exist for: the mean μ of the normal distribution (but not the variance σ²), the parameter λ of the Poisson distribution, and the probability p in the binomial or multinomial distribution.

Consider the model of a normal distribution with unknown mean but known variance: { Pθ = N(θ, σ²) | θ ∈ ℝ }. The data consist of n iid observations from this model: X = (x1, …, xn). We estimate the parameter θ using the sample mean of all observations:


    T(X) = \frac1n \sum_{i=1}^n x_i\ .

This estimator has mean θ and variance σ²/n, which is equal to the reciprocal of the Fisher information of the sample. Thus, the sample mean is a finite-sample efficient estimator for the mean of the normal distribution.
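
To make the equality explicit, the standard calculation (routine, and only sketched here) runs as follows. The log-likelihood of the sample is

    \ell(\theta) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n (x_i - \theta)^2 ,

so the score is \frac{\partial \ell}{\partial \theta} = \frac{1}{\sigma^2}\sum_{i=1}^n (x_i - \theta), and therefore

    \mathcal{I}_\theta = \mathrm{E}\left[ \left( \frac{\partial \ell}{\partial \theta} \right)^2 \right] = \frac{n}{\sigma^2}, \qquad \operatorname{Var}[\,T\,] = \frac{\sigma^2}{n} = \mathcal{I}_\theta^{-1} .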

Relative efficiency

If T1 and T2 are estimators for the parameter θ, then T1 is said to dominate T2 if:

  1. its mean squared error (MSE) is smaller for at least some value of θ, and
  2. the MSE does not exceed that of T2 for any value of θ.

Formally, T1 dominates T2 if

    \mathrm{E}\left[ (T_1 - \theta)^2 \right] \ \leq\ \mathrm{E}\left[ (T_2 - \theta)^2 \right]

holds for all θ, with strict inequality holding somewhere.
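
A concrete instance (our choice of estimators, for illustration only): for N(θ, 1) data, the sample mean T1 dominates the single-observation estimator T2 = x1, since MSE(T1) = 1/n while MSE(T2) = 1 for every θ. A minimal Python check:

    import numpy as np

    rng = np.random.default_rng(2)
    n, reps = 20, 200_000

    for theta in (-3.0, 0.0, 5.0):  # spot-check several true values
        x = rng.normal(theta, 1.0, size=(reps, n))
        mse_t1 = ((x.mean(axis=1) - theta) ** 2).mean()  # ~ 1/n = 0.05
        mse_t2 = ((x[:, 0] - theta) ** 2).mean()         # ~ 1
        print(f"theta = {theta:+.1f}: MSE(T1) ~ {mse_t1:.4f}, MSE(T2) ~ {mse_t2:.4f}")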

The relative efficiency is defined as

    e(T_1, T_2) = \frac{\mathrm{E}\left[ (T_2 - \theta)^2 \right]}{\mathrm{E}\left[ (T_1 - \theta)^2 \right]} .

Although e is in general a function of θ, in many cases the dependence drops out; if this is so, e being greater than one would indicate that T1 is preferable, whatever the true value of θ.
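
As an example of a case where the dependence on θ drops out (again a sketch with illustrative constants), take T1 = sample mean and T2 = sample median for N(θ, 1) data. Under normality the ratio is free of θ and tends to π/2 ≈ 1.57 as n grows, so the mean is preferable:

    import numpy as np

    rng = np.random.default_rng(3)
    theta, n, reps = 1.0, 101, 100_000

    x = rng.normal(theta, 1.0, size=(reps, n))
    mse_mean = ((x.mean(axis=1) - theta) ** 2).mean()
    mse_median = ((np.median(x, axis=1) - theta) ** 2).mean()
    print(f"e(mean, median) ~ {mse_median / mse_mean:.3f}")  # about pi/2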

Asymptotic efficiency

Some estimators can attain efficiency asymptotically and are thus called asymptotically efficient estimators. This can be the case for some maximum likelihood estimators, or for any estimator that attains equality of the Cramér–Rao bound asymptotically.
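
For instance (a sketch under our own choice of model), the maximum likelihood estimator of the rate λ of an exponential distribution, λ̂ = 1/x̄, is biased in finite samples, yet n·Var(λ̂) approaches the Cramér–Rao limit λ² as n grows:

    import numpy as np

    rng = np.random.default_rng(4)
    lam, reps = 2.0, 20_000

    for n in (10, 100, 1000):
        # NumPy parametrizes the exponential by scale = 1 / rate.
        x = rng.exponential(1.0 / lam, size=(reps, n))
        mle = 1.0 / x.mean(axis=1)
        print(f"n = {n:4d}: n * Var(MLE) ~ {n * mle.var():.3f}  (CRB limit: lam^2 = {lam ** 2:.1f})")

The printed values decrease toward λ² = 4, showing the estimator attaining the bound only in the limit.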

See also

Notes

References


