Projection pursuit regression

In statistics, projection pursuit regression (PPR) is a statistical model developed by Jerome H. Friedman and Werner Stuetzle that extends additive models. It adapts the additive model in that it first projects the data matrix of explanatory variables onto a set of optimally chosen directions before applying smoothing functions to the resulting projections.

Model overview

The model consists of linear combinations of non-linear transformations of linear combinations of explanatory variables. The basic model takes the form

Y = \beta_0 + \sum_{j=1}^{r} f_j(\beta_j' x) + \varepsilon,

where x is a column vector containing a particular row of the design matrix X, which contains p explanatory variables (columns) and n observations (rows). Here Y is the corresponding observation of the response variable to be predicted, and {\beta_j} is a collection of r unit vectors of length p containing the unknown parameters. Finally, r is the number of smooth non-parametric functions f_j, each applied to a constructed explanatory variable \beta_j' x. The value of r is found through cross-validation or a forward stage-wise strategy that stops when the model fit can no longer be significantly improved. For large values of r and an appropriate set of functions f_j, the PPR model is considered a universal estimator, as it can approximate any continuous function on R^p.

Thus this model takes the form of the basic additive model but with the additional \beta_j component, so that it fits \beta_j' x rather than the actual inputs x. The scalar \beta_j' x is the projection of x onto the unit vector \beta_j, where the directions \beta_j are chosen to optimize model fit. The functions f_j are left unspecified by the model and are estimated using some flexible smoothing method, preferably one with well-defined second derivatives to simplify computation. This makes PPR very general, since it fits non-linear functions f_j of any class of linear combinations of the inputs. The price of this flexibility is interpretability: because each input variable enters the fitted model in a complex and multifaceted way, the model is far more useful for prediction than for understanding the data.
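
To make the model's structure concrete, the following is a minimal Python sketch of evaluating an already-fitted PPR model; the directions and ridge functions used here are illustrative assumptions, not the output of any particular fitting routine.

  import numpy as np

  def ppr_predict(X, beta0, betas, fs):
      """Evaluate beta0 + sum_j f_j(beta_j' x) for each row x of X."""
      yhat = np.full(X.shape[0], beta0, dtype=float)
      for beta_j, f_j in zip(betas, fs):
          yhat += f_j(X @ beta_j)  # project onto beta_j, then transform
      return yhat

  # Example with r = 2 hypothetical ridge terms in p = 3 dimensions.
  rng = np.random.default_rng(0)
  X = rng.normal(size=(5, 3))
  betas = [np.array([1.0, 0.0, 0.0]),
           np.array([0.0, 0.6, 0.8])]   # unit-length directions
  fs = [np.sin, lambda z: z ** 2]       # assumed smooth ridge functions
  print(ppr_predict(X, beta0=0.5, betas=betas, fs=fs))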

Model estimation

For a given set of data \{(y_i, x_i)\}_{i=1}^{n}, the goal is to minimize the error function

S = \sum_{i=1}^{n} \left[ y_i - \sum_{j=1}^{r} f_j(\beta_j' x_i) \right]^2,

over the functions f_j and vectors \beta_j. Estimation alternates between the two: after estimating the smoothing functions f_j, one generally uses a Gauss–Newton iteration to solve for \beta_j, provided that the functions f_j are twice differentiable.
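
This alternating scheme can be sketched for a single term (r = 1): given the current direction, fit the ridge function with a univariate smoother; given the smoother, take a Gauss–Newton step on the direction using the smoother's derivative. The spline smoother, the damping factor, and the simulated data below are illustrative choices, not Friedman and Stuetzle's original smoothing procedure.

  import numpy as np
  from scipy.interpolate import UnivariateSpline

  def fit_ppr_one_term(X, y, n_iter=20, s=20.0):
      """Alternate between smoothing f and Gauss-Newton updates of beta."""
      p = X.shape[1]
      beta = np.ones(p) / np.sqrt(p)              # initial unit direction
      for _ in range(n_iter):
          t = X @ beta                            # projections beta' x_i
          order = np.argsort(t)                   # smoother needs sorted t
          f = UnivariateSpline(t[order], y[order], k=3, s=s)
          resid = y - f(t)                        # y_i - f(beta' x_i)
          J = -f.derivative()(t)[:, None] * X     # Jacobian of the residuals
          step, *_ = np.linalg.lstsq(J, -resid, rcond=None)
          beta = beta + 0.5 * step                # damped Gauss-Newton step
          beta /= np.linalg.norm(beta)            # keep beta a unit vector
      t = X @ beta                                # re-fit f at the final beta
      order = np.argsort(t)
      return beta, UnivariateSpline(t[order], y[order], k=3, s=s)

  # Recover a known direction from simulated data.
  rng = np.random.default_rng(1)
  X = rng.normal(size=(200, 3))
  true_beta = np.array([0.6, 0.8, 0.0])
  y = np.sin(X @ true_beta) + 0.1 * rng.normal(size=200)
  beta_hat, f_hat = fit_ppr_one_term(X, y)
  print(beta_hat)  # expected to roughly align with true_beta, up to sign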

It has been shown that the convergence rate, the bias and the variance are affected by the estimation of \beta_j and f_j. It has also been shown that \beta_j converges at an order of n^{1/2} (the parametric rate), while f_j converges at a slightly worse order.

Advantages of PPR estimation

  • It uses univariate regression functions instead of their multivariate form, thus effectively dealing with the curse of dimensionality.
  • Univariate regression allows for simple and efficient estimation.
  • Relative to generalized additive models, PPR can estimate a much richer class of functions.
  • Unlike local averaging methods (such as k-nearest neighbors), PPR can ignore variables with low explanatory power.

Disadvantages of PPR estimation

  • PPR requires searching a p-dimensional parameter space in order to estimate each \beta_j.
  • One must select the smoothing parameter for each f_j.
  • The model is often difficult to interpret.

Extensions of PPR

  • Alternate smoothers, such as the radial function, harmonic function and additive function, have been suggested and their performances vary depending on the data sets used.
  • Alternate optimization criteria have been used as well, such as standard absolute deviations and mean absolute deviations.
  • Ordinary least squares can be used to simplify calculations, as the data often do not exhibit strong non-linearities.
  • Sliced Inverse Regression (SIR) has been used to choose the direction vectors for PPR.
  • Generalized PPR combines regular PPR with iteratively reweighted least squares (IRLS) and a link function to model binary responses; the IRLS ingredient is sketched below.
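
The IRLS ingredient of generalized PPR can be illustrated in isolation. The sketch below fits a logistic model for a binary response against a fixed projection t = \beta' x by iteratively reweighted least squares; a full generalized PPR would additionally re-estimate the ridge function and the direction, which is omitted here.

  import numpy as np

  def irls_logistic(t, y, n_iter=25):
      """Fit y ~ logistic(a + b*t) by IRLS with the logit link."""
      T = np.column_stack([np.ones_like(t), t])   # intercept and projection
      coef = np.zeros(2)
      for _ in range(n_iter):
          eta = T @ coef                          # linear predictor
          mu = 1.0 / (1.0 + np.exp(-eta))         # inverse logit link
          w = mu * (1.0 - mu)                     # IRLS weights
          z = eta + (y - mu) / w                  # working response
          WT = T * w[:, None]                     # weighted design W T
          coef = np.linalg.solve(T.T @ WT, WT.T @ z)  # weighted LS step
      return coef

  # Simulated check: true intercept 0.5 and slope 2.0 on the projection.
  rng = np.random.default_rng(2)
  t = rng.normal(size=300)
  y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * t))))
  print(irls_logistic(t, y))  # roughly [0.5, 2.0]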

PPR vs neural networks (NN)

Both projection pursuit regression and neural network models project the input vector onto a one-dimensional subspace, apply a nonlinear transformation to the projection, and then add the results linearly. Thus both follow the same steps to overcome the curse of dimensionality. The main difference is that the functions f_j fitted in PPR can be different for each combination of input variables and are estimated one at a time, then updated along with the corresponding weights, whereas in NN the nonlinearities are all specified upfront and the weights are estimated simultaneously.

Thus, PPR estimation is more straightforward than that of NN, and the transformations of variables in PPR are data driven, whereas in NN these transformations are fixed in advance.
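
The structural parallel can be stated in code: a one-hidden-layer network applies a single fixed activation to every learned projection, while PPR attaches a separately estimated smooth function to each projection. The function names below are illustrative.

  import numpy as np

  sigma = np.tanh  # fixed activation shared by every hidden unit

  def nn_predict(X, b0, ws, alphas):
      # b0 + sum_j w_j * sigma(alpha_j' x): sigma is chosen upfront;
      # only the weights w_j and directions alpha_j are learned.
      return b0 + sum(w * sigma(X @ a) for w, a in zip(ws, alphas))

  def ppr_predict(X, b0, fs, betas):
      # b0 + sum_j f_j(beta_j' x): each ridge function f_j is itself
      # estimated from the data, one term at a time.
      return b0 + sum(f(X @ b) for f, b in zip(fs, betas))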

References

  • Friedman, J. H. and Stuetzle, W. (1981) "Projection Pursuit Regression". Journal of the American Statistical Association, 76, 817–823.
  • Hall, P. (1988) "Estimating the direction in which a data set is the most interesting". Probability Theory and Related Fields, 80, 51–77.
  • Hand, D., Mannila, H. and Smyth, P. (2001) Principles of Data Mining. MIT Press. ISBN 0-262-08290-X.
  • Hastie, T. J., Tibshirani, R. J. and Friedman, J. H. (2009) The Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer. ISBN 978-0-387-84857-0.
  • Klinke, S. and Grassmann, J. (2000) "Projection Pursuit Regression", in Schimek, M. G. (ed.) Smoothing and Regression: Approaches, Computation and Application. Wiley Interscience.
  • Lingjarde, O. C. and Liestol, K. (1998) "Generalized Projection Pursuit Regression". SIAM Journal on Scientific Computing, 20, 844–857.