Ridge and Lasso Regression
Learn about ridge and lasso regression, how they compare, and how their penalty contours intersect with the MSE contours.
In the previous lesson, we saw how regularization helps control overfitting by penalizing large weights and balancing the bias-variance trade-off. We also introduced L1 and L2 penalties and discussed their general effects on model behavior. Now, we focus specifically on ridge and lasso regression and examine how these penalties change the solution. Using the same linear model and squared loss, we compare ridge and lasso through their objective functions and visualize their behavior using MSE contours. This geometric perspective helps explain why ridge shrinks all coefficients, while lasso can drive some coefficients exactly to zero.
Ridge and Lasso objectives
Both ridge and lasso regression are special forms of regularized linear regression. They use the simplest model type (a linear model) and the standard way to measure error (squared loss), differing only in their regularization penalty.
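To make this concrete, here is a minimal sketch using scikit-learn (an assumed library choice; the lesson's ideas don't depend on it). Its `Ridge` and `Lasso` estimators fit the same linear model with squared loss, and the `alpha` argument sets the regularization strength; only the form of the penalty differs:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

# Synthetic data: 100 examples, 5 features, two of which are irrelevant.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([3.0, 0.0, -2.0, 0.0, 1.0])
y = X @ true_w + rng.normal(scale=0.5, size=100)

ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty: shrinks every weight a little
lasso = Lasso(alpha=0.1).fit(X, y)   # L1 penalty: can set some weights exactly to zero

print("ridge coefficients:", np.round(ridge.coef_, 3))
print("lasso coefficients:", np.round(lasso.coef_, 3))
```

On data like this, ridge typically keeps small nonzero values for the irrelevant features, while lasso often sets them exactly to zero, which is the behavior we explain geometrically in this lesson.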
The core model and loss function
Before introducing the penalty, we must define the model that makes a prediction and the loss function that measures the error.
Linear model ($\hat{y}$)
A linear model assumes the output ($\hat{y}$, the prediction) is a simple, weighted sum of the inputs ($x_1, x_2, \ldots, x_d$). The goal is to find the best set of weights ($w_0, w_1, \ldots, w_d$) that connect the inputs to the output.
- We have $n$ training examples, $(\mathbf{x}_i, y_i)$ for $i = 1, \ldots, n$. Each input $\mathbf{x}_i$ has $d$ features.
- The model expression: $\hat{y} = w_0 + w_1 x_1 + w_2 x_2 + \cdots + w_d x_d$
- $w_0$ is the intercept (or bias).
- $w_1$ to $w_d$ are the slopes or feature weights (a small numerical sketch of the model expression follows this list).
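Here is a quick numerical illustration of the model expression, with made-up weights and NumPy used only for the weighted sum:

```python
import numpy as np

# Hypothetical weights and a single input with d = 3 features.
w0 = 0.5                          # intercept (bias)
w = np.array([2.0, -1.0, 0.25])   # feature weights w1, w2, w3
x = np.array([1.5, 3.0, 4.0])     # feature values x1, x2, x3

# Prediction: y_hat = w0 + w1*x1 + w2*x2 + w3*x3
y_hat = w0 + w @ x
print(y_hat)                      # 0.5 + 3.0 - 3.0 + 1.0 = 1.5
```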
To simplify the math, we often combine $w_0$ with the other weights by adding a constant $1$ to the start of the feature vector: ...