Ridge Regression and Ridge Regression Kernel
References:
1. scikit-learn documentation, linear_model: Ridge regression
2. "Machine learning for quantum mechanics in a nutshell"
3. Ridge path example plot code by Fabian Pedregosa
Ridge regression
Ridge regression addresses some of the problems of Ordinary Least Squares by imposing a penalty on the size of the coefficients. The ridge coefficients minimize a penalized residual sum of squares:
\[\min_{w} \left\| Xw-y\right\|_2^{2} + \lambda\left\|w\right\|_2^2\]
Here, \(\lambda \ge 0\) is a complexity parameter that controls the amount of shrinkage: the larger the value of \(\lambda\), the greater the amount of shrinkage, and thus the coefficients become more robust to collinearity. The figure below shows the relationship between \(\lambda\) and the oscillations of the weights.
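As a rough sketch of this shrinkage effect (loosely following the ridge-path example cited in the references, but with my own variable names and a synthetic ill-conditioned design matrix), one can fit scikit-learn's `Ridge` over a range of \(\lambda\) values and watch the coefficients shrink:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic ill-conditioned (Hilbert-like) design matrix and targets
n = 10
X = 1.0 / (np.arange(1, n + 1) + np.arange(n)[:, np.newaxis])
y = np.ones(n)

# Fit ridge regression for a range of regularization strengths
lambdas = np.logspace(-10, -2, 50)
coefs = []
for lam in lambdas:
    model = Ridge(alpha=lam, fit_intercept=False)
    model.fit(X, y)
    coefs.append(model.coef_)

# Larger lambda -> smaller coefficient norm (more shrinkage)
print(np.linalg.norm(coefs[0]), np.linalg.norm(coefs[-1]))
```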
Kernel Ridge Regression Theory
Here \(\tilde{x}\) denotes a test input and \(x_i\) denotes a training sample; the kernel ridge regression predictor has the form:
\[f(\tilde{x}) = \sum_{i=1}^n \alpha_i k(\tilde{x}, x_i)\]
Although the dimensionality of the Hilbert space can be high, the solution lives in the finite span of the projected training data, enabling a finite representation. The corresponding convex optimization problem is:
\[\underset{\alpha \in \mathbb{R}^{n}} {\mathrm{argmin}} \sum_{i=1}^n \left(f(x_i) - y_i\right)^2 + \lambda\left \| f \right \|_{H}^{2} \]
\[\Leftrightarrow \underset{\alpha \in \mathbb{R}^{n}} {\mathrm{argmin}} \left \langle K\alpha-y, K\alpha-y \right \rangle + \lambda \alpha^{T}K\alpha\]
Here \(\left \| f \right \|_{H}^{2}\) is the squared norm of \(f\) in the Hilbert space, which measures the complexity of the linear ridge regression model in feature space, and \(K \in \mathbb{R}^{n\times n}\), with \(K_{i, j}=k(x_i, x_j)\), is the kernel matrix between training samples. As before, setting the gradient to 0 yields an analytic solution for the regression coefficients:
\[\nabla_{\alpha}\left(\alpha^{T}K^{2}\alpha - 2\alpha^{T}Ky + y^{T}y + \lambda\alpha^{T}K\alpha\right) = 0 \Leftrightarrow K^{2}\alpha + \lambda K\alpha = Ky\]
\[\Leftrightarrow \alpha=(K+\lambda I)^{-1}y \]
where \(\lambda\) is a hyperparameter determining the strength of regularization. A smaller norm of the coefficient vector \(\alpha\) corresponds to a smoother, simpler model.
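A minimal sketch of this closed-form solution in plain NumPy (assuming a Gaussian kernel and synthetic data of my own, not code from the references): build \(K\), then solve \((K+\lambda I)\alpha = y\) directly:

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

# Synthetic 1-D training data
rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(20, 1))
y_train = np.sin(X_train).ravel()

sigma, lam = 1.0, 1e-10
K = gaussian_kernel(X_train, X_train, sigma)                 # K_ij = k(x_i, x_j)
alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)   # alpha = (K + lambda I)^{-1} y

# Training-set fit of the learned model (the residual shrinks as lambda -> 0)
print("train MSE:", np.mean((K @ alpha - y_train) ** 2))
```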
The figure below gives an example of a KRR model with a Gaussian kernel and demonstrates the role of the length-scale hyperparameter \(\sigma\). Although \(\sigma\) does not appear directly in the regularization term, it controls the smoothness of the predictor and therefore also acts as an effective regularizer.
The figure above shows kernel ridge regression with a Gaussian kernel and different length scales. The target function is \(\cos(x)\) and the learned KRR models are drawn as dashed lines. With the regularization constant \(\lambda\) set to \(10^{-14}\), a very small \(\sigma\) fits the training set well but gives large errors between training points, while a too-large \(\sigma\) yields a model that is close to linear, with both high training error and high prediction error.
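The figure itself is not reproduced here, but a rough reconstruction of the experiment (my own illustrative code, not the code behind the original figure) fits KRR to a few samples of \(\cos(x)\) with \(\lambda = 10^{-14}\) and compares training and in-between errors for several length scales:

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

X_train = np.linspace(-5, 5, 10).reshape(-1, 1)
y_train = np.cos(X_train).ravel()
X_test = np.linspace(-5, 5, 200).reshape(-1, 1)   # dense grid probes the gaps between training points
y_test = np.cos(X_test).ravel()

lam = 1e-14
for sigma in [0.1, 1.0, 10.0]:                    # sweep the length scale
    K = gaussian_kernel(X_train, X_train, sigma)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)
    train_mse = np.mean((K @ alpha - y_train) ** 2)
    test_mse = np.mean((gaussian_kernel(X_test, X_train, sigma) @ alpha - y_test) ** 2)
    print(f"sigma={sigma:5.1f}  train MSE={train_mse:.3e}  test MSE={test_mse:.3e}")
```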
From the description above we can see that all information about the KRR model is contained in the matrix \(\mathbf K\) of kernel evaluations between training samples. Similarly, all information required to predict new inputs \(\tilde{x}\) is contained in the kernel matrix of training versus prediction data.
Kernel Ridge Regression
The regression coefficients \(\alpha\) are obtained by solving the linear system of equations \((\mathbf K + \lambda \mathbf I)\alpha=\mathbf y\), where \((\mathbf K + \lambda \mathbf I)\) is symmetric and strictly positive definite. To solve this system we can use the Cholesky decomposition \(\mathbf K + \lambda \mathbf I = \mathbf{U}^{T}\mathbf{U}\), where \(\mathbf U\) is upper triangular. One then breaks the equation \(\mathbf{U}^{T}\mathbf{U}\alpha=\mathbf y\) into two triangular systems: first \(\mathbf{U}^{T}\beta=\mathbf y\), then \(\mathbf{U}\alpha=\beta\). Since \(\mathbf{U}^{T}\) is lower triangular and \(\mathbf{U}\) is upper triangular, this requires only two straightforward passes over the data, called forward and backward substitution, respectively. For \(\mathbf{U}^{T}\beta=\mathbf y\), forward substitution proceeds as follows:
\[\mathbf{U_{1, 1}^T} \beta_1=y_1 \Leftrightarrow \beta_1=y_1/u_{1,1}\]
\[\mathbf{U_{2, 1}^T} \beta_1 + \mathbf{U_{2, 2}^T} \beta_2=y_2 \Leftrightarrow \beta_2=(y_2 - u_{1, 2}\beta_1)/u_{2,2}\]
\[\cdots\]
\[\sum_{j=1}^i \mathbf{U_{i, j}^T}\beta_j=y_i \Leftrightarrow \beta_i=\left(y_i-\sum_{j=1}^{i-1}u_{j,i}\beta_j\right)/u_{i,i}\]
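As a sketch of this two-step triangular solve (using SciPy's `cholesky` and `solve_triangular`; the system below is a synthetic placeholder, with \(K\) standing in for the kernel matrix built earlier):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

# Synthetic symmetric positive definite system standing in for (K + lambda*I) alpha = y
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
K = B @ B.T                                  # symmetric positive semi-definite
lam = 1e-3
y = rng.standard_normal(50)

U = cholesky(K + lam * np.eye(50))           # K + lambda*I = U^T U, U upper triangular
beta = solve_triangular(U, y, trans='T')     # forward substitution:  U^T beta = y
alpha = solve_triangular(U, beta)            # backward substitution: U alpha = beta

# Same result as a direct solve
print(np.allclose(alpha, np.linalg.solve(K + lam * np.eye(50), y)))
```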
Once the model is trained, predictions can be made: the prediction for a new input \(\tilde{x}\) is the inner product between the vector of coefficients and the vector of corresponding kernel evaluations.
For a test dataset \(\mathbf{\tilde{X}} \in \mathbb{R}^{\tilde{n} \times d}\) with rows \(\tilde{x}_1,\ldots,\tilde{x}_{\tilde{n}}\), let \(\mathbf{L} \in \mathbb{R}^{n \times \tilde{n}}\) be the kernel matrix of training versus prediction inputs, with \(L_{i,j}=k(x_i, \tilde{x}_j)\); the predictions are then \(\mathbf{\hat{y}} = \mathbf{L}^{T}\alpha\).
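Putting training and prediction together in one sketch (again with an illustrative Gaussian kernel and synthetic data of my own, not code from the references), the prediction step is a single matrix product with \(\mathbf L\):

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-sq / (2 * sigma**2))

rng = np.random.default_rng(1)
X_train = rng.uniform(-3, 3, size=(30, 2))               # n training samples, d = 2
y_train = np.cos(X_train[:, 0]) * np.sin(X_train[:, 1])
X_test = rng.uniform(-3, 3, size=(8, 2))                 # n~ test samples

sigma, lam = 1.0, 1e-8
K = gaussian_kernel(X_train, X_train, sigma)
alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)

L = gaussian_kernel(X_train, X_test, sigma)   # L_ij = k(x_i, x~_j), shape (n, n~)
y_pred = L.T @ alpha                          # y_hat = L^T alpha
print(y_pred)
```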
This method has the same order of complexity as Ordinary Least Squares. In the next notes I will go into the details of implementing KRR.