Bayesian information criterion

In statistics, the Bayesian information criterion (BIC) or Schwarz criterion (also SBC, SBIC) is a criterion for model selection among a finite set of models; the model with the lowest BIC is preferred. It is based, in part, on the likelihood function and is closely related to the Akaike information criterion (AIC).

When fitting models, it is possible to increase the likelihood by adding parameters, but doing so may result in overfitting. Both BIC and AIC resolve this problem by introducing a penalty term for the number of parameters in the model; the penalty term is larger in BIC than in AIC.

The BIC was developed by Gideon E. Schwarz and published in a 1978 paper,[1] where he gave a Bayesian argument for adopting it.

Definition

The BIC is formally defined as[2]

 \mathrm{BIC} = -2 \cdot \ln\hat{L} + k \cdot \ln(n),

where

  • x = the observed data;
  • \theta = the parameters of the model;
  • n = the number of data points in x, the number of observations, or equivalently, the sample size;
  • k = the number of free parameters to be estimated. If the model under consideration is a linear regression, k is the number of regressors, including the intercept;
  • \hat L = the maximized value of the likelihood function of the model M, i.e. \hat L=p(x|\hat\theta,M), where \hat\theta are the parameter values that maximize the likelihood function.
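
As a numerical illustration, the definition can be evaluated directly once a model has been fitted by maximum likelihood. The following Python sketch fits a normal distribution to a small sample and computes its BIC; the data values and the helper function bic are assumptions made for this illustration, not part of the definition above.

  import numpy as np
  from scipy import stats

  def bic(log_likelihood, k, n):
      # BIC = -2 * ln(L_hat) + k * ln(n)
      return -2.0 * log_likelihood + k * np.log(n)

  # Illustrative data: a small sample modelled as draws from a normal distribution.
  x = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 4.4])
  n = len(x)

  mu_hat = x.mean()            # maximum-likelihood estimate of the mean
  sigma_hat = x.std(ddof=0)    # maximum-likelihood estimate of the standard deviation
  log_lik = stats.norm.logpdf(x, loc=mu_hat, scale=sigma_hat).sum()

  k = 2                        # two free parameters: mu and sigma
  print(bic(log_lik, k, n))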

The BIC is an asymptotic result derived under the assumption that the data distribution is in the exponential family. That is, the integral of the likelihood function p(x|\theta,M) multiplied by the prior probability distribution p(\theta|M) over the parameters \theta of the model M, for fixed observed data x, is approximated as

 -2 \cdot \ln p(x|M) \approx \mathrm{BIC} = -2 \cdot \ln\hat{L} + k \cdot (\ln(n) - \ln(2\pi)).

For large n, this can be approximated by the formula given above, since \ln(2\pi) is negligible compared with \ln(n). The BIC is used in model selection problems where only differences in BIC between candidate models matter, so adding a constant that is the same for every model does not change the result.
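
As a rough numerical check, for a sample of size n = 1000 (an arbitrary choice) one has

 \ln(1000) \approx 6.91 \quad \text{versus} \quad \ln(2\pi) \approx 1.84,

so the dropped term k \cdot \ln(2\pi) is small compared with k \cdot \ln(n). Moreover, only differences in BIC matter when ranking models, and any additive term common to all candidate models cancels:

 \mathrm{BIC}(M_1) - \mathrm{BIC}(M_2) = -2 \cdot \ln\frac{\hat L_1}{\hat L_2} + (k_1 - k_2) \cdot \ln(n).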

Limitations of BIC

The BIC suffers from two main limitations:[3]

  1. The approximation above is only valid for sample sizes n much larger than the number k of parameters in the model.
  2. The BIC cannot handle complex collections of models, as in the variable selection (or feature selection) problem in high-dimensional settings.[3]

Gaussian case

Under the assumption that the model errors or disturbances are independent and identically distributed according to a normal distribution, and under the boundary condition that the derivative of the log-likelihood with respect to the true variance is zero, the BIC becomes (up to an additive constant, which depends only on n and not on the model):[4]

 \mathrm{BIC} = n \cdot \ln(\widehat{\sigma_e^2}) + k \cdot \ln(n),

where \widehat{\sigma_e^2} is the error variance. The error variance in this case is defined as

 \widehat{\sigma_e^2} = \frac{1}{n} \sum_{i=1}^n (x_i - \hat{x}_i)^2,

which is a biased estimator of the true variance. In terms of the residual sum of squares (RSS), the BIC is

 \mathrm{BIC} = n \cdot \ln(\mathrm{RSS}/n) + k \cdot \ln(n).
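
For concreteness, the residual-sum-of-squares form lends itself to a short computation. The Python sketch below fits two candidate linear models by ordinary least squares and scores each with the formula above, counting k as the number of regressors including the intercept, as in the definition section; the simulated data and the helper name bic_from_rss are assumptions made for the illustration.

  import numpy as np

  def bic_from_rss(rss, n, k):
      # BIC = n * ln(RSS / n) + k * ln(n), up to the model-independent additive constant
      return n * np.log(rss / n) + k * np.log(n)

  rng = np.random.default_rng(0)
  n = 200
  x1 = rng.normal(size=n)
  x2 = rng.normal(size=n)
  y = 1.0 + 2.0 * x1 + rng.normal(scale=0.5, size=n)   # x2 is irrelevant by construction

  # Model 1: intercept + x1 (k = 2 regressors including the intercept)
  X1 = np.column_stack([np.ones(n), x1])
  rss1 = np.sum((y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]) ** 2)

  # Model 2: intercept + x1 + x2 (k = 3)
  X2 = np.column_stack([np.ones(n), x1, x2])
  rss2 = np.sum((y - X2 @ np.linalg.lstsq(X2, y, rcond=None)[0]) ** 2)

  print(bic_from_rss(rss1, n, 2), bic_from_rss(rss2, n, 3))
  # The smaller model typically attains the lower BIC here, since x2 adds no explanatory power.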

When testing multiple linear models against a saturated model, the BIC can be rewritten in terms of the deviance \chi^2 as:[5]

 \mathrm{BIC} = \chi^2 + df \cdot \ln(n),

where df is the number of degrees of freedom in the test.

When picking from several models, the one with the lowest BIC is preferred. The BIC is an increasing function of the error variance \sigma_e^2 and an increasing function of k. That is, unexplained variation in the dependent variable and the number of explanatory variables increase the value of BIC. Hence, lower BIC implies either fewer explanatory variables, better fit, or both. The strength of the evidence against the model with the higher BIC value can be summarized as follows:[5]

ΔBIC      Evidence against higher BIC
0 to 2    Not worth more than a bare mention
2 to 6    Positive
6 to 10   Strong
> 10      Very strong

The BIC generally penalizes free parameters more strongly than the Akaike information criterion does, though this depends on the size of n and the relative magnitude of n and k.

It is important to keep in mind that the BIC can be used to compare estimated models only when the numerical values of the dependent variable are identical for all estimates being compared. The models being compared need not be nested, unlike the case when models are being compared using an F-test or a likelihood ratio test.
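
As an illustration of how such a comparison might be reported, the Python sketch below computes BIC from the deviance form above for two hypothetical models fitted to the same data and translates the difference into the rough verbal categories of the table; the deviance and degrees-of-freedom values are made up for the illustration.

  import math

  def bic_from_deviance(deviance, df, n):
      # BIC = chi^2 + df * ln(n), the deviance form given above
      return deviance + df * math.log(n)

  def evidence(delta_bic):
      # Rough verbal categories for the BIC difference (evidence against the higher-BIC model)
      if delta_bic < 2:
          return "not worth more than a bare mention"
      elif delta_bic < 6:
          return "positive"
      elif delta_bic < 10:
          return "strong"
      return "very strong"

  n = 500
  bic_a = bic_from_deviance(deviance=612.4, df=4, n=n)   # hypothetical model A
  bic_b = bic_from_deviance(deviance=595.0, df=6, n=n)   # hypothetical model B

  delta = abs(bic_a - bic_b)
  preferred = "A" if bic_a < bic_b else "B"
  print(f"Preferred model: {preferred}; evidence against the other: {evidence(delta)} (difference = {delta:.1f})")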

Characteristics of the Bayesian information criterion

  1. It is independent of the prior, or assumes that the prior is "vague" (a constant).
  2. It can measure the efficiency of the parameterized model in terms of predicting the data.
  3. It penalizes the complexity of the model where complexity refers to the number of parameters in the model.
  4. It is approximately equal to the minimum description length criterion but with negative sign.
  5. It can be used to choose the number of clusters according to the intrinsic complexity present in a particular dataset (a short sketch follows this list).
  6. It is closely related to other penalized likelihood criteria such as RIC and the Akaike information criterion.
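
As a sketch of point 5, a common recipe is to fit mixture models with an increasing number of components and keep the number that minimizes the BIC. The example below assumes scikit-learn is available and uses its built-in bic method for fitted Gaussian mixtures; the simulated two-cluster data are an assumption of the illustration.

  import numpy as np
  from sklearn.mixture import GaussianMixture

  rng = np.random.default_rng(1)
  # Two well-separated clusters in two dimensions (illustrative data)
  data = np.vstack([
      rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2)),
      rng.normal(loc=[4.0, 4.0], scale=0.5, size=(100, 2)),
  ])

  # Fit mixtures with 1 to 5 components and score each fit by its BIC
  bics = []
  for n_components in range(1, 6):
      gm = GaussianMixture(n_components=n_components, random_state=0).fit(data)
      bics.append(gm.bic(data))

  best = int(np.argmin(bics)) + 1
  print(f"Number of clusters selected by BIC: {best}")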

Notes

  1. Schwarz, G. E. (1978), "Estimating the dimension of a model", Annals of Statistics 6 (2): 461–464.
  2. Wit, E.; van den Heuvel, E.; Romeijn, J.-W. (2012), "'All models are wrong...': an introduction to model uncertainty", Statistica Neerlandica 66 (3): 217–236.
  3. Giraud, C. (2015), Introduction to High-Dimensional Statistics, Chapman & Hall/CRC, ISBN 9781482237948.
  4. Priestley, M.B. (1981), Spectral Analysis and Time Series, Academic Press. ISBN 0-12-564922-3 (p. 375).
  5. Kass, R. E.; Raftery, A. E. (1995), "Bayes Factors", Journal of the American Statistical Association 90 (430): 773–795.
