Lasso (statistics)

In statistics and machine learning, lasso (least absolute shrinkage and selection operator) (also Lasso or LASSO) is a regression analysis method that performs both variable selection and regularization in order to enhance the prediction accuracy and interpretability of the statistical model it produces. It was introduced by Robert Tibshirani in 1996 based on Leo Breiman’s Nonnegative Garrote.[1][2] Lasso was originally formulated for least squares models and this simple case reveals a substantial amount about the behavior of the estimator, including its relationship to ridge regression and best subset selection and the connections between lasso coefficient estimates and so-called soft thresholding. It also reveals that (like standard linear regression) the coefficient estimates need not be unique if covariates are collinear.

Though originally defined for least squares, lasso regularization is readily extended to a wide variety of statistical models, including generalized linear models, generalized estimating equations, proportional hazards models, and M-estimators.[1][3] Lasso's ability to perform subset selection relies on the form of the constraint and has a variety of interpretations, including in terms of geometry, Bayesian statistics, and convex analysis.

Motivation

Robert Tibshirani introduced lasso in order to improve the prediction accuracy and interpretability of regression models by altering the model fitting process to select only a subset of the provided covariates for use in the final model rather than using all of them.[1] It is based on Breiman’s Nonnegative Garrote, which has similar goals, but works somewhat differently.[2]

Prior to lasso, the most widely used method for choosing which covariates to include was stepwise selection, which only improves prediction accuracy in certain cases, such as when only a few covariates have a strong relationship with the outcome. However, in other cases, it can make prediction error worse. Also, at the time, ridge regression was the most popular technique for improving prediction accuracy. Ridge regression improves prediction error by shrinking large regression coefficients in order to reduce overfitting, but it does not perform covariate selection and therefore does not help to make the model more interpretable.

Lasso is able to achieve both of these goals by forcing the sum of the absolute values of the regression coefficients to be less than a fixed value, which forces certain coefficients to be set to zero, effectively choosing a simpler model that does not include those coefficients. This idea is similar to ridge regression, in which the sum of the squares of the coefficients is forced to be less than a fixed value, though in the case of ridge regression this only shrinks the size of the coefficients; it does not set any of them to zero.

Basic form

Lasso was originally introduced in the context of least squares, and it can be instructive to consider this case first, since it illustrates many of lasso’s properties in a straightforward setting.

Consider a sample consisting of N cases, each of which consists of p covariates and a single outcome. Let  y_i be the outcome and  x_i := (x_{i1}, x_{i2}, \ldots, x_{ip})^T be the covariate vector for the ith case. Then the objective of lasso is to solve

 \min_{ \beta_0, \beta } \left\{ \frac{1}{N} \sum_{i=1}^N (y_i - \beta_0 - x_i^T \beta)^2 \right\} \text{ subject to } \sum_{j=1}^p |\beta_j| \leq t. [1]

Here t is a prespecified free parameter that determines the amount of regularization. Letting  X be the covariate matrix, so that  X_{ij} = (x_i)_j and x_i^T is the ith row of X, we can write this more compactly as

 \min_{ \beta_0, \beta } \left\{ \frac{1}{N} \left\| y - \beta_0 - X \beta \right\|_2^2 \right\} \text{ subject to } \| \beta \|_1 \leq t,

where  \| Z \|_p = \left( \sum_{i=1}^N | Z_i |^p \right)^{1/p} is the standard  \ell^p norm.

Since the optimal intercept satisfies  \hat{\beta}_0 = \bar{y} - \bar{x}^T \beta , so that

 y_i - \hat{\beta}_0 - x_i^T \beta = y_i - ( \bar{y} - \bar{x}^T \beta ) - x_i^T \beta = ( y_i - \bar{y} ) - ( x_i - \bar{x} )^T \beta,

it is standard to work with centered variables. Additionally, the covariates are typically standardized  \textstyle \left( \sum_{i=1}^N x_{ij}^2 = 1 \right) so that the solution does not depend on the measurement scale.
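
As a concrete illustration of this preprocessing, the following sketch (using NumPy and entirely hypothetical data) centers the outcome and covariates and rescales each column so that  \textstyle \sum_{i=1}^N x_{ij}^2 = 1 , matching the convention above.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))   # hypothetical covariates (N = 100, p = 5)
    y = rng.normal(size=100)        # hypothetical outcomes

    # Center the outcome and covariates, which removes the intercept from the problem.
    y_c = y - y.mean()
    X_c = X - X.mean(axis=0)

    # Rescale each column so that sum_i x_ij**2 = 1, as in the convention above.
    X_std = X_c / np.sqrt((X_c ** 2).sum(axis=0))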

It can be helpful to rewrite

 \min_{ \beta \in \mathbb{R}^p } \left\{ \frac{1}{N} \left\| y - X \beta \right\|_2^2 \right\} \text{ subject to } \| \beta \|_1 \leq t

in the so-called Lagrangian form

 \min_{ \beta \in \mathbb{R}^p } \left\{ \frac{1}{N} \left\| y - X \beta \right\|_2^2 + \lambda \| \beta \|_1 \right\}

where the exact relationship between  t and  \lambda is data dependent.
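
For illustration, the Lagrangian form is the one most software solves directly. The sketch below is a minimal example, assuming hypothetical data and scikit-learn's Lasso estimator; that estimator minimizes  \frac{1}{2N} \| y - X\beta \|_2^2 + \alpha \| \beta \|_1 , so its alpha parameter plays the role of  \lambda / 2 in the notation used here.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))            # hypothetical covariates
    beta_true = np.array([3.0, -2.0, 0, 0, 1.5, 0, 0, 0, 0, 0])
    y = X @ beta_true + rng.normal(size=200)  # hypothetical outcome

    # scikit-learn minimizes (1/(2N))||y - X b||^2 + alpha * ||b||_1,
    # so alpha corresponds to lambda/2 in the article's notation.
    model = Lasso(alpha=0.1).fit(X, y)
    print(model.coef_)   # several entries are exactly zero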

Orthonormal covariates

We can now examine some basic properties of the lasso estimator.

We first assume that the covariates are orthonormal so that  ( x_i \mid x_j ) = \delta_{ij} , where  ( \cdot \mid \cdot ) is the inner product and  \delta_{ij} is the Kronecker delta, or, equivalently,  X^T X = I . Then, using subgradient methods, we can show that

 \hat{\beta}_j = S_{N \lambda}( \hat{\beta}^\text{OLS}_j ) = \hat{\beta}^\text{OLS}_j \max \left( 0, 1 - \frac{ N \lambda }{ |\hat{\beta}^\text{OLS}_j| } \right) \text{ where } \hat{\beta}^\text{OLS} = (X^T X)^{-1} X^T y = X^T y [1]

 S_\alpha is referred to as the soft thresholding operator, since it translates values towards zero (making them exactly zero if they are small enough) instead of setting smaller values to zero and leaving larger ones untouched as the hard thresholding operator, often denoted  H_\alpha , would.

We can compare this to ridge regression, where the objective is to solve

 \min_{ \beta \in \mathbb{R}^p } \left\{ \frac{1}{N} \| y - X \beta \|_2^2 + \lambda \| \beta \|_2^2 \right\}

which yields

 \hat{\beta}_j = ( 1 + N \lambda )^{-1} \hat{\beta}^\text{OLS}_j.

So ridge regression shrinks all coefficients by a uniform factor of  (1 + N \lambda)^{-1} and does not set any coefficients to zero.

We can also compare this to regression with best subset selection, in which the goal is to solve

 \min_{ \beta \in \mathbb{R}^p } \left\{ \frac{1}{N} \left\| y - X \beta \right\|_2^2 + \lambda \| \beta \|_0 \right\}

where  \| \cdot \|_0 is the " \ell^0 norm", which is defined as  \| z \|_0 = m if exactly m components of z are nonzero. In this case, it can be shown that

 \hat{\beta}_j = H_{ \sqrt{ N \lambda } } \left( \hat{\beta}^\text{OLS}_j \right) = \hat{\beta}^\text{OLS}_j \mathrm{I} \left( \left| \hat{\beta}^\text{OLS}_j \right| \geq \sqrt{ N \lambda } \right)

where  H_\alpha is the so-called hard thresholding function and  \mathrm{I} is an indicator function (it is 1 if its argument is true and 0 otherwise).

Therefore, the lasso estimates share features of the estimates from both ridge and best subset selection regression: like ridge regression, lasso shrinks the magnitude of all the coefficients, but, like best subset selection, it also sets some of them to zero. Additionally, while ridge regression scales all of the coefficients by a constant factor, lasso instead translates the coefficients towards zero by a constant value and sets them to zero if they reach it.
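
The three closed-form rules above can be compared side by side. The following sketch (hypothetical data; the orthonormal design is produced with a QR decomposition) applies the soft-thresholding, uniform-shrinkage, and hard-thresholding formulas to the same OLS coefficients.

    import numpy as np

    rng = np.random.default_rng(1)
    N, p = 50, 5
    X, _ = np.linalg.qr(rng.normal(size=(N, p)))   # orthonormal columns: X.T @ X = I
    beta_true = np.array([2.0, -1.5, 0.0, 0.0, 0.5])
    y = X @ beta_true + 0.1 * rng.normal(size=N)

    beta_ols = X.T @ y          # OLS solution when X^T X = I
    lam = 0.01                  # regularization parameter (lambda)

    soft = np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - N * lam, 0)   # lasso
    ridge = beta_ols / (1 + N * lam)                                       # ridge
    hard = beta_ols * (np.abs(beta_ols) >= np.sqrt(N * lam))               # best subset

    print(np.round(np.c_[beta_ols, soft, ridge, hard], 3))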

Correlated covariates

Returning to the general case, in which the different covariates may not be independent, we consider the special case in which two of the covariates, say j and k, are identical for each case, so that  x_{(j)} = x_{(k)} , where  x_{(j),i} = x_{ij} . Then the values of  \beta_j and  \beta_k that minimize the lasso objective function are not uniquely determined. In fact, if there is some solution  \hat{\beta} in which  \hat{\beta}_j \hat{\beta}_k \geq 0 , then, for any  s \in [0,1] , replacing  \hat{\beta}_j by  s ( \hat{\beta}_j + \hat{\beta}_k ) and  \hat{\beta}_k by  (1 - s ) ( \hat{\beta}_j + \hat{\beta}_k ) , while keeping all the other  \hat{\beta}_i fixed, gives a new solution, so the lasso objective function has a continuum of valid minimizers.[4] Several variants of the lasso, including the elastic net, have been designed to address this shortcoming; they are discussed below.
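
This non-uniqueness is easy to verify numerically. The sketch below (hypothetical data and an arbitrary illustrative coefficient vector, not an actual lasso solution) checks that the interpolated coefficients described above give exactly the same objective value whenever the two entries share a sign.

    import numpy as np

    def lasso_objective(beta, X, y, lam):
        """(1/N) * ||y - X beta||^2 + lam * ||beta||_1"""
        N = len(y)
        return np.sum((y - X @ beta) ** 2) / N + lam * np.sum(np.abs(beta))

    rng = np.random.default_rng(2)
    x = rng.normal(size=(30, 1))
    X = np.hstack([x, x, rng.normal(size=(30, 1))])   # columns 0 and 1 are identical
    y = rng.normal(size=30)
    lam = 0.1

    beta_hat = np.array([0.4, 0.2, -0.3])             # illustrative coefficients
    s = 0.25
    beta_alt = beta_hat.copy()
    beta_alt[0] = s * (beta_hat[0] + beta_hat[1])
    beta_alt[1] = (1 - s) * (beta_hat[0] + beta_hat[1])

    # Same fitted values and same l1 penalty, hence the same objective value.
    print(np.isclose(lasso_objective(beta_hat, X, y, lam),
                     lasso_objective(beta_alt, X, y, lam)))   # True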

General form

Lasso regularization can be extended to a wide variety of objective functions, such as those for generalized linear models, generalized estimating equations, proportional hazards models, and M-estimators in general.[1][3] Given the objective function

 \frac{1}{N} \sum_{i=1}^N f( x_i, y_i, \alpha, \beta )

the lasso regularized version of the estimator will be the solution to

 \min_{ \alpha, \beta } \frac{1}{N} \sum_{i=1}^N f( x_i, y_i, \alpha, \beta ) \text{ subject to } \| \beta \|_1 \leq t

where only  \beta is penalized while  \alpha is free to take any allowed value, just as  \beta_0 was not penalized in the basic case.
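
As one example of such an extension, an  \ell^1 -penalized logistic regression (a generalized linear model) can be fit as in the sketch below, which assumes hypothetical data and uses scikit-learn's LogisticRegression; there the parameter C is the inverse of the regularization strength, and the unpenalized intercept plays the role of  \alpha .

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    X = rng.normal(size=(300, 8))                    # hypothetical covariates
    logits = 2.0 * X[:, 0] - 1.5 * X[:, 1]           # only two covariates matter
    y = rng.binomial(1, 1 / (1 + np.exp(-logits)))   # binary outcome

    # penalty='l1' imposes a lasso-type penalty on the coefficients only.
    clf = LogisticRegression(penalty='l1', solver='saga', C=0.5, max_iter=5000).fit(X, y)
    print(clf.coef_)   # most entries are exactly zero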

Interpretations of lasso

Geometric interpretation

Forms of the constraint regions for lasso and ridge regression.

As discussed above, lasso can set coefficients to zero, while ridge regression, which appears superficially similar, cannot. This is due to the difference in the shape of the constraint boundaries in the two cases. Both lasso and ridge regression can be interpreted as minimizing the same objective function

 \min_{ \beta_0, \beta } \left\{ \frac{1}{N} \left\| y - \beta_0 - X \beta \right\|_2^2 \right\}

but with respect to different constraints:  \| \beta \|_1 \leq t for lasso and  \| \beta \|_2^2 \leq t for ridge. The figure shows that the constraint region defined by the  \ell^1 norm is a square (in general a hypercube) rotated so that its corners lie on the axes, while the region defined by the  \ell^2 norm is a circle (in general an n-sphere), which is rotationally invariant and therefore has no corners. As seen in the figure, a convex object that lies tangent to the boundary, such as the line shown, is likely to encounter a corner (or, in higher dimensions, an edge or higher-dimensional equivalent) of the hypercube, at which some components of  \beta are identically zero. In the case of an n-sphere, by contrast, the points on the boundary at which some components of  \beta are zero are not distinguished from the others, so the convex object is no more likely to contact a point at which some components of  \beta are zero than one at which none of them are.

Bayesian interpretation

Laplace distributions are sharply peaked at their mean with more probability density concentrated there compared to a normal distribution.

Just as ridge regression can be interpreted as linear regression for which the coefficients have been assigned normal prior distributions, lasso can be interpreted as linear regression for which the coefficients have Laplace prior distributions. The Laplace distribution is sharply peaked at zero (its first derivative is discontinuous) and it concentrates its probability mass closer to zero than does the normal distribution. This provides an alternative explanation of why lasso tends to set some coefficients to zero, while ridge regression does not.[1]
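
To make this correspondence explicit, suppose (as a standard assumption, not stated above) that  y \mid X, \beta \sim N( X \beta, \sigma^2 I ) and that the  \beta_j have independent Laplace priors with density proportional to  \exp( - |\beta_j| / \tau ) . The posterior mode (MAP estimate) then solves

 \hat{\beta}^\text{MAP} = \arg\max_{\beta} \left[ \log p( y \mid X, \beta ) + \log p( \beta ) \right] = \arg\min_{\beta} \left\{ \frac{1}{2 \sigma^2} \| y - X \beta \|_2^2 + \frac{1}{\tau} \| \beta \|_1 \right\},

which is the Lagrangian form of lasso with  \lambda = 2 \sigma^2 / ( N \tau ) .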

Convex relaxation interpretation

Lasso can also be viewed as a convex relaxation of the best subset selection regression problem, which is to find the subset of at most  k covariates that results in the smallest value of the objective function for some fixed  k \leq n , where n is the total number of covariates. The " \ell^0 norm",  \| \cdot \|_0 , which gives the number of nonzero entries of a vector, is the limiting case of the " \ell^p norms" of the form  \textstyle \| x \|_p = \left( \sum_{j=1}^n | x_j |^p \right)^{1/p} (where the quotation marks signify that these are not really norms for  p < 1 , since  \| \cdot \|_p is not convex for  p < 1 , so the triangle inequality does not hold). Therefore, since p = 1 is the smallest value for which the " \ell^p norm" is convex (and therefore actually a norm), lasso is, in some sense, the best convex approximation to the best subset selection problem, since the region defined by  \| x \|_1 \leq t is the convex hull of the region defined by  \| x \|_p \leq t for  p < 1 .

Generalizations of lasso

A number of lasso variants have been created in order to remedy certain limitations of the original technique and to make the method more useful for particular problems. Almost all of these focus on respecting or utilizing different types of dependencies among the covariates. Elastic net regularization adds an additional ridge regression-like penalty which improves performance when the number of predictors is larger than the sample size, allows the method to select strongly correlated variables together, and improves overall prediction accuracy.[4] Group lasso allows groups of related covariates to be selected as a single unit, which can be useful in settings where it does not make sense to include some covariates without others.[5] Further extensions of group lasso to perform variable selection within individual groups (sparse group lasso) and to allow overlap between groups (overlap group lasso) have also been developed.[6][7] Fused lasso can account for the spatial or temporal characteristics of a problem, resulting in estimates that better match the structure of the system being studied.[8] Lasso regularized models can be fit using a variety of techniques including subgradient methods, least-angle regression (LARS), and proximal gradient methods. Determining the optimal value for the regularization parameter is an important part of ensuring that the model performs well; it is typically chosen using cross-validation.

Elastic net

In 2005, Zou and Hastie introduced the elastic net to address several shortcomings of lasso.[4] When p > n (the number of covariates is greater than the sample size) lasso can select only n covariates (even when more are associated with the outcome) and it tends to select only one covariate from any set of highly correlated covariates. Additionally, even when n > p, if the covariates are strongly correlated, ridge regression tends to perform better.

The elastic net extends lasso by adding an additional  \ell^2 penalty term giving

 \min_{ \beta \in \mathbb{R}^p } \left\{ \left\| y - X \beta \right\|_2^2 + \lambda_1 \| \beta \|_1 + \lambda_2 \| \beta \|_2^2 \right\},

which is equivalent to solving

 \min_{ \beta_0, \beta } \left\{ \left\| y - \beta_0 - X \beta \right\|_2^2 \right\} \text{ subject to } ( 1 - \alpha ) \| \beta \|_1 + \alpha \| \beta \|_2^2 \leq t, \text{ where } \alpha = \frac{\lambda_2}{\lambda_1 + \lambda_2}.

Somewhat surprisingly, this problem can be written in a simple lasso form

 \min_{ \beta^* \in \mathbb{R}^p } \left\{ \left\| y^* - X^* \beta^* \right\|_2^2 + \lambda^* \| \beta^* \|_1 \right\}

letting

 X_{(n+p) \times p}^* = ( 1 + \lambda_2 )^{-1/2} \binom{X}{ \lambda_2^{1/2} I_{p \times p} } ,    y_{(n+p)}^* = \binom{y}{0^p}, \qquad \lambda^* = \frac{ \lambda_1 }{ \sqrt{ 1 + \lambda_2 } } ,    \beta^* = \sqrt{ 1 + \lambda_2 } \beta.

Then  \hat{\beta} = \frac{ \hat{\beta}^* }{ \sqrt{ 1 + \lambda_2 } } , which, when the covariates are orthogonal to each other, gives

 \hat{\beta}_j = \frac{ \hat{\beta}^\text{*,OLS}_j }{ \sqrt{ 1 + \lambda_2 } } \max \left( 0, 1 - \frac{ \lambda^* }{ \left| \hat{\beta}^\text{*,OLS}_j \right| } \right) = \frac{ \hat{\beta}^\text{OLS}_j }{ 1 + \lambda_2 } \max \left( 0, 1 - \frac{ \lambda_1 }{ \left| \hat{\beta}^\text{OLS}_j \right| } \right) = ( 1 + \lambda_2 )^{-1} \hat{\beta}^\text{lasso}_j.

So the result of the elastic net penalty is a combination of the effects of the lasso and ridge penalties.
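
The data-augmentation identity above can be carried out directly. The sketch below assumes hypothetical data and uses scikit-learn's Lasso purely as a generic lasso solver; since that solver divides the squared error by  2n , its alpha is rescaled accordingly.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(4)
    n, p = 100, 6
    X = rng.normal(size=(n, p))
    y = rng.normal(size=n)
    lam1, lam2 = 0.5, 1.0                      # l1 and l2 penalty weights

    # Augmented design and response from the identity above.
    X_star = np.vstack([X, np.sqrt(lam2) * np.eye(p)]) / np.sqrt(1 + lam2)
    y_star = np.concatenate([y, np.zeros(p)])
    lam_star = lam1 / np.sqrt(1 + lam2)

    # Solve the lasso on the augmented data; scikit-learn's objective divides
    # the squared error by 2 * n_samples, hence the alpha rescaling.
    n_star = n + p
    beta_star = Lasso(alpha=lam_star / (2 * n_star),
                      fit_intercept=False).fit(X_star, y_star).coef_

    beta_enet = beta_star / np.sqrt(1 + lam2)  # elastic net estimate
    print(np.round(beta_enet, 3))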

Returning to the general case, the fact that the penalty function is now strictly convex means that if  x_{(j)} = x_{(k)} , then  \hat{\beta}_j = \hat{\beta}_k , which is a change from lasso.[4] In general, if  \hat{\beta}_j \hat{\beta}_k > 0 , then

 \frac{ | \hat{\beta}_j - \hat{\beta}_k | }{ \| y \|_1 } \leq \lambda_2^{-1} \sqrt{ 2 ( 1 - \rho_{jk} ) }, \text{ where } \rho = X^T X

is the sample correlation matrix, because the  x 's are normalized.

Therefore, highly correlated covariates will tend to have similar regression coefficients, with the degree of similarity depending on both  \| y \|_1 and  \lambda_2 , which is very different from lasso. This phenomenon, in which strongly correlated covariates have similar regression coefficients, is referred to as the grouping effect and is generally considered desirable since, in many applications, such as identifying genes associated with a disease, one would like to find all the associated covariates, rather than selecting only one from each set of strongly correlated covariates, as lasso often does.[4] In addition, selecting only a single covariate from each group will typically result in increased prediction error, since the model is less robust (which is why ridge regression often outperforms lasso).

Group lasso

In 2006, Yuan and Lin introduced the group lasso in order to allow predefined groups of covariates to be selected into or out of a model together, so that all the members of a particular group are either included or not included.[5] While there are many settings in which this is useful, perhaps the most obvious is when levels of a categorical variable are coded as a collection of binary covariates. In this case, it often does not make sense to include only a few levels of the covariate; the group lasso can ensure that all the variables encoding the categorical covariate are either included or excluded from the model together. Another setting in which grouping is natural is in biological studies. Since genes and proteins often lie in known pathways, an investigator may be more interested in which pathways are related to an outcome than whether particular individual genes are. The objective function for the group lasso is a natural generalization of the standard lasso objective

 \min_{ \beta \in \mathbb{R}^p } \left\{ \left\| y - \sum_{j=1}^J X_j \beta_j \right\|_2^2 + \lambda \sum_{j=1}^J \| \beta_j \|_{K_j} \right\}, \qquad \| z \|_{K_j} = ( z^T K_j z )^{1/2}

where the design matrix  X and covariate vector  \beta have been replaced by a collection of design matrices  X_j and covariate vectors  \beta_j , one for each of the J groups. Additionally, the penalty term is now a sum over  \ell^2 norms defined by the positive definite matrices  K_j . If each covariate is in its own group and  K_j = I , then this reduces to the standard lasso, while if there is only a single group and  K_1 = I , it reduces to ridge regression. Since the penalty reduces to an  \ell^2 norm on the subspace defined by each group, it cannot select only some of the covariates from a group, just as ridge regression cannot. However, because the penalty is a sum over the different subspace norms, as in the standard lasso, the constraint has some non-differentiable points, which correspond to some subspaces being identically zero. Therefore, it can set the coefficient vectors corresponding to some subspaces to zero, while only shrinking others. It is possible to extend the group lasso to the so-called sparse group lasso, which can select individual covariates within a group, by adding an additional  \ell^1 penalty to each group subspace.[6] Another extension, group lasso with overlap, allows covariates to be shared between different groups, e.g. if a gene were to occur in two pathways.[7]
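
The way whole groups are zeroed out can be seen in the proximal operator of the group penalty with  K_j = I : each group's coefficient block is shrunk in  \ell^2 norm and set exactly to zero when that norm falls below the threshold. A minimal sketch with hypothetical groups:

    import numpy as np

    def group_soft_threshold(beta, groups, threshold):
        """Block soft-thresholding: shrink each group's l2 norm, zeroing small groups."""
        out = np.zeros_like(beta)
        for idx in groups:
            norm = np.linalg.norm(beta[idx])
            if norm > threshold:
                out[idx] = (1 - threshold / norm) * beta[idx]
        return out

    beta = np.array([0.9, -0.4, 0.05, 0.02, 1.5])
    groups = [np.array([0, 1]), np.array([2, 3]), np.array([4])]
    print(group_soft_threshold(beta, groups, 0.3))
    # the small middle group is set exactly to zero; the other groups are only shrunk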

Fused lasso

In some cases, the object being studied may have important spatial or temporal structure that must be accounted for during analysis, such as time series or image-based data. In 2005, Tibshirani and colleagues introduced the fused lasso to extend the use of lasso to exactly this type of data.[8] The fused lasso objective function is

 \min_\beta \left\{ \frac{1}{N} \sum_{i=1}^N \left( y_i - x_i^T \beta \right)^2 \right\} \text{ subject to } \sum_{j=1}^p |\beta_j|  \leq t_1 \text{ and } \sum_{j=2}^p |\beta_j - \beta_{j-1}|  \leq t_2.

The first constraint is just the typical lasso constraint, but the second directly penalizes large changes with respect to the temporal or spatial structure, which forces the coefficients to vary in a smooth fashion that reflects the underlying logic of the system being studied. Clustered lasso[9] is a generalization of the fused lasso that identifies and groups relevant covariates based on their effects (coefficients). The basic idea is to penalize the differences between the coefficients so that the nonzero ones form clusters. This can be modeled using the following regularization:

 \sum_{i<j}^p |\beta_i - \beta_{j}|  \leq t_2.

In contrast, one can first cluster variables into highly correlated groups, and then extract a single representative covariate from each cluster.[10]
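
For reference, the penalties involved can be evaluated directly; the sketch below computes the ordinary lasso penalty, the fused lasso difference penalty, and the clustered lasso all-pairs penalty for a hypothetical coefficient vector.

    import numpy as np

    beta = np.array([0.0, 0.0, 1.2, 1.1, 1.2, 0.0])   # hypothetical coefficients

    lasso_penalty = np.sum(np.abs(beta))                        # sum_j |beta_j|
    fused_penalty = np.sum(np.abs(np.diff(beta)))               # sum_j |beta_j - beta_{j-1}|
    clustered_penalty = sum(abs(beta[i] - beta[j])
                            for i in range(len(beta))
                            for j in range(i + 1, len(beta)))   # sum_{i<j} |beta_i - beta_j|

    print(lasso_penalty, fused_penalty, clustered_penalty)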

Model fitting

Though the lasso penalty is not differentiable, a wide variety of techniques from convex analysis and optimization theory have been developed to extremize such functions. These include subgradient methods, least-angle regression (LARS), and proximal gradient methods.[11] Subgradient methods are the natural generalization of traditional methods such as gradient descent and stochastic gradient descent to the case in which the objective function is not differentiable at all points. LARS is a method that is closely tied to lasso models, and in many cases allows them to be fit very efficiently, though it may not perform well in all circumstances. Proximal methods have become popular because of their flexibility and performance and are an area of active research. The choice of method will depend on the particular lasso variant being used, the data, and the available resources; however, proximal methods generally perform well in most circumstances.
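
As an illustration of the proximal gradient approach, the following is a minimal sketch of iterative soft-thresholding (ISTA) for the Lagrangian objective  \frac{1}{N} \| y - X \beta \|_2^2 + \lambda \| \beta \|_1 , with a fixed step size taken from the largest eigenvalue of  X^T X ; it is meant to convey the idea rather than to serve as a production solver.

    import numpy as np

    def lasso_ista(X, y, lam, n_iter=500):
        """Proximal gradient (ISTA) for (1/N)||y - X b||^2 + lam * ||b||_1."""
        N, p = X.shape
        step = N / (2 * np.linalg.eigvalsh(X.T @ X).max())   # 1 / Lipschitz constant
        beta = np.zeros(p)
        for _ in range(n_iter):
            grad = (2.0 / N) * X.T @ (X @ beta - y)          # gradient of the smooth part
            z = beta - step * grad                           # gradient step
            beta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
        return beta

    rng = np.random.default_rng(5)
    X = rng.normal(size=(200, 10))
    beta_true = np.concatenate([np.array([2.0, -1.0, 0.5]), np.zeros(7)])
    y = X @ beta_true + 0.1 * rng.normal(size=200)
    print(np.round(lasso_ista(X, y, lam=0.1), 3))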

In addition to fitting the parameters, choosing the regularization parameter is also a fundamental part of using lasso. Selecting it well is essential to the performance of lasso, since it controls the strength of shrinkage and variable selection, which, in moderation, can improve both prediction and interpretability. However, if the regularization becomes too strong, important variables may be left out of the model and coefficients may be shrunk excessively, which can harm both predictive capacity and the inferences drawn about the system being studied. LARS is unique in this regard, as it generates complete regularization paths, which makes determining the optimal value of the regularization parameter much more straightforward.[11] With other methods, cross-validation is typically used to select the parameter. Additionally, a variety of heuristics related to choosing the regularization and optimization parameters are often used in order to attempt to improve performance further.
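
In practice the cross-validation step is often automated; for example, scikit-learn's LassoCV (used in the sketch below on hypothetical data) fits the model over a grid of regularization values and selects the one with the smallest cross-validated error.

    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(6)
    X = rng.normal(size=(150, 20))
    beta_true = np.concatenate([np.array([1.5, -2.0, 1.0]), np.zeros(17)])
    y = X @ beta_true + rng.normal(size=150)

    # 5-fold cross-validation over an automatically chosen grid of alphas.
    model = LassoCV(cv=5).fit(X, y)
    print(model.alpha_)    # selected regularization parameter
    print(model.coef_)     # coefficients at the selected alpha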

References

  1. Tibshirani, Robert. 1996. "Regression Shrinkage and Selection via the Lasso". Journal of the Royal Statistical Society. Series B (Methodological) 58 (1). Wiley: 267–88. http://www.jstor.org/stable/2346178.
  2. Breiman, Leo. 1995. "Better Subset Regression Using the Nonnegative Garrote". Technometrics 37 (4). Taylor & Francis, Ltd.: 373–84. doi:10.2307/1269730.
  3. Tibshirani, Robert. 1997. "The Lasso Method for Variable Selection in the Cox Model". Statistics in Medicine 16: 385–395.
  4. Zou, Hui, and Trevor Hastie. 2005. "Regularization and Variable Selection via the Elastic Net". Journal of the Royal Statistical Society. Series B (Statistical Methodology) 67 (2). Wiley: 301–20. http://www.jstor.org/stable/3647580.
  5. Yuan, Ming, and Yi Lin. 2006. "Model Selection and Estimation in Regression with Grouped Variables". Journal of the Royal Statistical Society. Series B (Statistical Methodology) 68 (1). Wiley: 49–67. http://www.jstor.org/stable/3647556.
  6. Puig, Arnau Tibau, Ami Wiesel, and Alfred O. Hero. "A Multidimensional Shrinkage-Thresholding Operator". Proceedings of the 15th Workshop on Statistical Signal Processing, SSP'09, IEEE: 113–116.
  7. Jacob, Laurent, Guillaume Obozinski, and Jean-Philippe Vert. 2009. "Group Lasso with Overlap and Graph Lasso". Proceedings of the 26th International Conference on Machine Learning, Montreal, Canada.
  8. Tibshirani, Robert, Michael Saunders, Saharon Rosset, Ji Zhu, and Keith Knight. 2005. "Sparsity and Smoothness via the Fused Lasso". Journal of the Royal Statistical Society. Series B (Statistical Methodology) 67 (1). Wiley: 91–108. http://www.jstor.org/stable/3647602.
  9. (Citation missing in the source text.)
  10. (Citation missing in the source text.)
  11. Efron, Bradley, Trevor Hastie, Iain Johnstone, and Robert Tibshirani. 2004. "Least Angle Regression". The Annals of Statistics 32 (2). Institute of Mathematical Statistics: 407–51. http://www.jstor.org/stable/3448465.