Probit model

In statistics, a probit model is a type of regression where the dependent variable can take only two values, for example married or not married. The name comes from probability + unit.[1] The purpose of the model is to estimate the probability that an observation with particular characteristics will fall into a specific one of the categories; moreover, if estimated probabilities greater than 1/2 are treated as classifying an observation into a predicted category, the probit model is a type of binary classification model.

A probit model is a popular specification for an ordinal[2] or a binary response model. As such it treats the same set of problems as does logistic regression, using similar techniques. The probit model, which employs a probit link function, is most often estimated by the standard maximum likelihood procedure; such an estimation is called a probit regression.

Probit models were introduced by Chester Bliss in 1934;[3] a fast method for computing maximum likelihood estimates for them was proposed by Ronald Fisher as an appendix to Bliss' work in 1935.[4]

Conceptual framework

Suppose response variable Y is binary, that is it can have only two possible outcomes which we will denote as 1 and 0. For example Y may represent presence/absence of a certain condition, success/failure of some device, answer yes/no on a survey, etc. We also have a vector of regressors X, which are assumed to influence the outcome Y. Specifically, we assume that the model takes the form


    \Pr(Y=1 \mid X) = \Phi(X^T\beta),

where Pr denotes probability, and Φ is the cumulative distribution function (CDF) of the standard normal distribution. The parameters β are typically estimated by maximum likelihood.
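
As a quick illustration, the sketch below evaluates this probability in Python with SciPy; the coefficient and regressor values are hypothetical, chosen only to show the computation.

    # Minimal sketch: evaluating Pr(Y = 1 | X) = Phi(X'beta).
    # The values of beta and x below are hypothetical.
    import numpy as np
    from scipy.stats import norm

    beta = np.array([0.5, -1.2])   # hypothetical coefficients (incl. intercept)
    x = np.array([1.0, 0.3])       # regressor vector, first entry is the constant
    p = norm.cdf(x @ beta)         # Phi(X'beta)
    print(p)                       # estimated probability that Y = 1 given this x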

It is possible to motivate the probit model as a latent variable model. Suppose there exists an auxiliary random variable

 Y^\ast = X^T\beta + \varepsilon, \,

where ε ~ N(0, 1). Then Y can be viewed as an indicator for whether this latent variable is positive:

 Y = \begin{cases} 1 & \text{if }Y^\ast > 0 \ \text{ i.e. } - \varepsilon < X^T\beta, \\
0 &\text{otherwise.} \end{cases}

The use of the standard normal distribution causes no loss of generality compared with a normal distribution with arbitrary mean and standard deviation: adding a fixed amount to the mean can be compensated by subtracting the same amount from the intercept, and multiplying the standard deviation by a fixed amount can be compensated by multiplying the weights by the same amount.

To see that the two models are equivalent, note that


\begin{align}
\Pr(Y = 1 \mid X) &= \Pr(Y^\ast > 0) = \Pr(X^T\beta + \varepsilon > 0) \\
&= \Pr(\varepsilon > -X^T\beta) \\
&= \Pr(\varepsilon < X^T\beta) \quad \text{(by symmetry of the normal distribution)}\\
&= \Phi(X^T\beta)
\end{align}
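
A small Monte Carlo check makes the equivalence concrete: simulate the latent-variable model and compare the empirical frequency of Y = 1 with Φ(X'β). The parameter values below are hypothetical.

    # Simulate Y via the latent variable Y* = X'beta + eps, eps ~ N(0, 1),
    # and compare the empirical frequency of Y = 1 with Phi(X'beta).
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    beta = np.array([0.5, -1.2])           # hypothetical coefficients
    x = np.array([1.0, 0.3])               # fixed regressor vector

    eps = rng.standard_normal(1_000_000)   # standard normal errors
    y = (x @ beta + eps > 0).astype(int)   # indicator that Y* > 0

    print(y.mean(), norm.cdf(x @ beta))    # the two numbers should nearly agree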

Model estimation

Maximum likelihood estimation

Suppose the data set \{y_i,x_i\}_{i=1}^n contains n independent statistical units corresponding to the model above. Then their joint log-likelihood function is

 \ln\mathcal{L}(\beta) = \sum_{i=1}^n \bigg( y_i\ln\Phi(x_i'\beta) + (1-y_i)\ln\!\big(1-\Phi(x_i'\beta)\big) \bigg)

The estimator \hat\beta which maximizes this function will be consistent, asymptotically normal and efficient provided that E[XX'] exists and is not singular. It can be shown that this log-likelihood function is globally concave in β, and therefore standard numerical algorithms for optimization will converge rapidly to the unique maximum.
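
As an illustration, here is a minimal sketch of probit maximum likelihood in Python, maximizing the log-likelihood above with a general-purpose optimizer; the simulated data and the optimizer choice are assumptions made for the example.

    # Probit MLE sketch: minimize the negative log-likelihood with BFGS.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    n, beta_true = 5000, np.array([0.5, -1.0])
    X = np.column_stack([np.ones(n), rng.standard_normal(n)])    # [1, x]
    y = (X @ beta_true + rng.standard_normal(n) > 0).astype(int)

    def neg_log_lik(beta):
        p = norm.cdf(X @ beta)
        p = np.clip(p, 1e-12, 1 - 1e-12)   # guard against log(0)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    beta_hat = minimize(neg_log_lik, x0=np.zeros(2), method="BFGS").x
    print(beta_hat)                        # should be close to beta_true

Because the log-likelihood is globally concave, the starting point affects only how many iterations the optimizer needs, not which maximum it finds.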

The asymptotic distribution of \hat\beta is given by

\sqrt{n}(\hat\beta - \beta)\ \xrightarrow{d}\ \mathcal{N}(0,\,\Omega^{-1}),

where

\Omega = \operatorname{E}\bigg[ \frac{\varphi^2(X'\beta)}{\Phi(X'\beta)(1-\Phi(X'\beta))}XX' \bigg], \qquad
  \hat\Omega = \frac{1}{n}\sum_{i=1}^n \frac{\varphi^2(x'_i\hat\beta)}{\Phi(x'_i\hat\beta)(1-\Phi(x'_i\hat\beta))}x_ix'_i

and φ = Φ′ is the probability density function (PDF) of the standard normal distribution.
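
In practice Ω is replaced by its sample analogue \hat\Omega above. A sketch of the resulting standard errors, written as a function that can be applied to the X and \hat\beta from the previous sketch:

    # Plug-in estimate of Omega and the implied standard errors for beta_hat.
    import numpy as np
    from scipy.stats import norm

    def probit_standard_errors(X, beta_hat):
        xb = X @ beta_hat
        w = norm.pdf(xb)**2 / (norm.cdf(xb) * (1 - norm.cdf(xb)))
        omega_hat = (X * w[:, None]).T @ X / X.shape[0]   # (1/n) sum w_i x_i x_i'
        cov = np.linalg.inv(omega_hat) / X.shape[0]       # Var(beta_hat) ~ Omega^{-1}/n
        return np.sqrt(np.diag(cov))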

Berkson's minimum chi-square method

This method can be applied only when there are many observations of the response variable y_i sharing the same value of the regressor vector x_i (such a situation is referred to as "many observations per cell"). More specifically, the model can be formulated as follows.

Suppose that among the n observations \{y_i,x_i\}_{i=1}^n there are only T distinct values of the regressors, denoted \{x_{(1)},\ldots,x_{(T)}\}. Let n_t be the number of observations with x_i=x_{(t)}, and r_t the number of such observations with y_i=1. We assume that there are indeed "many" observations in each "cell": for each  t, \lim_{n \rightarrow \infty} n_t/n = c_t > 0 .

Denote

 \hat{p}_t = r_t/n_t
 \hat\sigma_t^2 = \frac{1}{n_t} \frac{\hat{p}_t(1-\hat{p}_t)}{\varphi^2\big(\Phi^{-1}(\hat{p}_t)\big)}

Then Berkson's minimum chi-square estimator is a generalized least squares estimator in a regression of \Phi^{-1}(\hat{p}_t) on x_{(t)} with weights \hat\sigma_t^{-2}:

 \hat\beta = \Bigg( \sum_{t=1}^T \hat\sigma_t^{-2}x_{(t)}x'_{(t)} \Bigg)^{-1} \sum_{t=1}^T \hat\sigma_t^{-2}x_{(t)}\Phi^{-1}(\hat{p}_t)

It can be shown that this estimator is consistent (as n→∞ and T fixed), asymptotically normal and efficient.[citation needed] Its advantage is the presence of a closed-form formula for the estimator. However, it is only meaningful to carry out this analysis when individual observations are not available, only their aggregated counts r_t, n_t, and x_{(t)} (for example in the analysis of voting behavior).
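
A sketch of the estimator from aggregated counts follows; the cell counts below are hypothetical, chosen only to illustrate the weighted least squares computation.

    # Berkson's minimum chi-square estimator from aggregated cells.
    import numpy as np
    from scipy.stats import norm

    # Hypothetical cells: regressor rows x_(t), trials n_t, successes r_t.
    X_cells = np.array([[1.0, -1.0], [1.0, 0.0], [1.0, 1.0]])
    n_t = np.array([400, 500, 450])
    r_t = np.array([60, 190, 320])

    p_hat = r_t / n_t
    z = norm.ppf(p_hat)                               # Phi^{-1}(p_hat_t)
    w = n_t * norm.pdf(z)**2 / (p_hat * (1 - p_hat))  # weights sigma_t^{-2}

    # Weighted least squares of z on x_(t): (sum w x x')^{-1} sum w x z.
    A = (X_cells * w[:, None]).T @ X_cells
    b = (X_cells * w[:, None]).T @ z
    beta_hat = np.linalg.solve(A, b)
    print(beta_hat)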

Gibbs sampling

Gibbs sampling of a probit model is possible because regression models typically use normal prior distributions over the weights, and this distribution is conjugate with the normal distribution of the errors (and hence of the latent variables Y*). The model can be described as



\begin{align}
\boldsymbol\beta & \sim \mathcal{N}(\mathbf{b}_0, \mathbf{B}_0) \\[3pt]
y_i^\ast\mid\mathbf{x}_i,\boldsymbol\beta & \sim \mathcal{N}(\mathbf{x}'_i\boldsymbol\beta, 1) \\[3pt]
 y_i  & = \begin{cases} 1 & \text{if } y_i^\ast > 0 \\ 0 & \text{otherwise} \end{cases}
\end{align}


From this, we can determine the full conditional densities needed:


\begin{align}
\mathbf{B} &= (\mathbf{B}_0^{-1} + \mathbf{X}'\mathbf{X})^{-1} \\[3pt]
\boldsymbol\beta\mid\mathbf{y}^\ast &\sim \mathcal{N}(\mathbf{B}(\mathbf{B}_0^{-1}\mathbf{b}_0 + \mathbf{X}'\mathbf{y}^\ast), \mathbf{B}) \\[3pt]
y_i^\ast\mid y_i=0,\mathbf{x}_i,\boldsymbol\beta &\sim \mathcal{N}(\mathbf{x}'_i\boldsymbol\beta, 1)[y_i^\ast < 0] \\[3pt]
y_i^\ast\mid y_i=1,\mathbf{x}_i,\boldsymbol\beta &\sim \mathcal{N}(\mathbf{x}'_i\boldsymbol\beta, 1)[y_i^\ast \ge 0]
\end{align}

The result for β is given in the article on Bayesian linear regression, although specified with different notation.

The only trickiness is in the last two equations. The notation [y_i^\ast < 0] is the Iverson bracket, sometimes written \mathcal{I}(y_i^\ast < 0) or similar. It indicates that the distribution must be truncated to the given range and rescaled appropriately; in this particular case, a truncated normal distribution arises. Sampling from this distribution depends on how much is truncated. If a large fraction of the original mass remains, sampling can be done easily with rejection sampling: simply sample a number from the non-truncated distribution, and reject it if it falls outside the restriction imposed by the truncation. If only a small fraction of the original mass remains, however (e.g. when sampling from one of the tails of the normal distribution, for instance if \mathbf{x}'_i\boldsymbol\beta is around 3 or more and a negative sample is desired), this will be inefficient and it becomes necessary to fall back on other sampling algorithms. General sampling from the truncated normal can be achieved using approximations to the normal CDF and the probit function, and R provides a function rtnorm() (in the msm package) for generating truncated-normal samples.
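
A minimal sketch of this sampler in Python follows, using SciPy's truncated normal; the prior hyperparameters and the simulated data are assumptions made for the example.

    # Gibbs sampler for the probit model via truncated-normal data augmentation.
    import numpy as np
    from scipy.stats import truncnorm

    rng = np.random.default_rng(2)
    n, beta_true = 1000, np.array([0.5, -1.0])
    X = np.column_stack([np.ones(n), rng.standard_normal(n)])
    y = (X @ beta_true + rng.standard_normal(n) > 0).astype(int)

    b0 = np.zeros(2)                      # prior mean b_0 (assumed)
    B0_inv = np.eye(2) / 10.0             # prior precision, i.e. B_0 = 10 I (assumed)
    B = np.linalg.inv(B0_inv + X.T @ X)   # posterior covariance; fixed across sweeps
    L = np.linalg.cholesky(B)

    beta, draws = np.zeros(2), []
    for _ in range(2000):
        # Draw latent y* from N(x'beta, 1) truncated to (-inf, 0) or [0, inf).
        mu = X @ beta
        lo = np.where(y == 1, 0.0, -np.inf)
        hi = np.where(y == 1, np.inf, 0.0)
        y_star = truncnorm.rvs(lo - mu, hi - mu, loc=mu, scale=1.0,
                               random_state=rng)
        # Draw beta from its Gaussian full conditional.
        mean = B @ (B0_inv @ b0 + X.T @ y_star)
        beta = mean + L @ rng.standard_normal(2)
        draws.append(beta)

    print(np.mean(draws[500:], axis=0))   # posterior mean after burn-in

Note that scipy.stats.truncnorm takes its truncation bounds in standardized units, hence the lo - mu and hi - mu arguments.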

Model evaluation

The suitability of an estimated binary model can be evaluated by counting the number of observations with true value 1, and the number with true value 0, to which the model assigns a correct predicted classification, treating any estimated probability above 1/2 as a prediction of 1 (and any below 1/2 as a prediction of 0).
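
A sketch of this count, written as a function that can be applied to the X, y, and \hat\beta from the estimation sketch above:

    # Count correct classifications at the 1/2 probability cutoff.
    import numpy as np
    from scipy.stats import norm

    def classification_counts(X, y, beta_hat):
        pred = (norm.cdf(X @ beta_hat) > 0.5).astype(int)   # predicted class
        ones_correct = int(np.sum((pred == 1) & (y == 1)))
        zeros_correct = int(np.sum((pred == 0) & (y == 0)))
        return ones_correct, zeros_correct, float((pred == y).mean())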

References

  1. Oxford English Dictionary, 3rd ed., s.v. "probit" (article dated June 2007).
  2. "Ordinal probit regression model", UCLA Academic Technology Services, http://www.ats.ucla.edu/stat/stata/dae/ologit.htm
  3. Bliss, C. I. (1934). "The Method of Probits". Science 79 (2037): 38–39.
  4. Fisher, R. A. (1935). "The Case of Zero Survivors in Probit Assays". Annals of Applied Biology 22: 164–165.
