[Repost] Derivation of the Normal Equation for linear regression
I was going through the Coursera "Machine Learning" course, and in the section on multivariate linear regression something caught my eye. Andrew Ng presented the Normal Equation as an analytical solution to the linear regression problem with a least-squares cost function. He mentioned that in some cases (such as for small feature sets) using it is more effective than applying gradient descent; unfortunately, he left its derivation out.
Here I want to show how the normal equation is derived.
First, some terminology. The notation below follows the machine learning course rather than the exposition of the normal equation on Wikipedia and other sites; semantically it's all the same, only the symbols differ.
Given the hypothesis function:

$$h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \cdots + \theta_n x_n$$
We'd like to minimize the least-squares cost:

$$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$
Where $x^{(i)}$ is the i-th sample (from a set of m samples) and $y^{(i)}$ is the i-th expected result.
To proceed, we'll represent the problem in matrix notation; this is natural, since we essentially have a system of linear equations here. The regression coefficients we're looking for are the vector:

$$\theta = \begin{pmatrix} \theta_0 \\ \theta_1 \\ \vdots \\ \theta_n \end{pmatrix}$$
Each of the m input samples is similarly a column vector with n+1 rows, with $x_0$ being 1 for convenience:

$$x^{(i)} = \begin{pmatrix} x_0^{(i)} \\ x_1^{(i)} \\ \vdots \\ x_n^{(i)} \end{pmatrix}$$

So we can now rewrite the hypothesis function as:

$$h_\theta(x^{(i)}) = \theta^T x^{(i)}$$
When this is summed over all samples, we can dip further into matrix notation. We'll define the "design matrix" X (uppercase X) as a matrix of m rows, in which each row is the i-th sample transposed (the row vector $(x^{(i)})^T$). With this, we can rewrite the least-squares cost as follows, replacing the explicit sum by matrix multiplication:

$$J(\theta) = \frac{1}{2m} (X\theta - y)^T (X\theta - y)$$
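Since this switch to matrix notation is the key step, a quick numerical sanity check may help. Below is a minimal NumPy sketch (my own illustration, not from the course; names like `J_sum` are made up) verifying that the matrix form of the cost agrees with the explicit sum:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3                                   # m samples, n features

# Design matrix: first column is all ones (x_0 = 1), one row per sample
X = np.hstack([np.ones((m, 1)), rng.normal(size=(m, n))])
y = rng.normal(size=m)                        # expected results
theta = rng.normal(size=n + 1)                # some candidate coefficients

# Explicit sum: J = 1/(2m) * sum_i (theta^T x_i - y_i)^2
J_sum = sum((X[i] @ theta - y[i]) ** 2 for i in range(m)) / (2 * m)

# Matrix form: J = 1/(2m) * (X theta - y)^T (X theta - y)
r = X @ theta - y
J_matrix = r @ r / (2 * m)

assert np.isclose(J_sum, J_matrix)
```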
Now, using some matrix transpose identities, we can simplify this a bit. I'll throw the $\frac{1}{2m}$ factor away, since we're going to set a derivative to zero anyway:

$$J(\theta) = ((X\theta)^T - y^T)(X\theta - y) = (X\theta)^T X\theta - (X\theta)^T y - y^T (X\theta) + y^T y$$
Note that $X\theta$ is a vector, and so is $y$, so the products $(X\theta)^T y$ and $y^T (X\theta)$ are the same scalar; the order of multiplication doesn't matter (as long as the dimensions work out). So we can further simplify:

$$J(\theta) = \theta^T X^T X \theta - 2 (X\theta)^T y + y^T y$$
Recall that $\theta$ here is our unknown. To find where the above function has a minimum, we differentiate it with respect to $\theta$ and set the result to 0. Differentiating with respect to a vector may feel uncomfortable, but there's nothing to worry about. Recall that we only use matrix notation here to conveniently represent a system of linear formulae, so we differentiate with respect to each component of the vector, and then combine the resulting partial derivatives into a vector again. The result is:

$$\frac{\partial J}{\partial \theta} = 2 X^T X \theta - 2 X^T y = 0$$
Or:

$$X^T X \theta = X^T y$$
[Update 27-May-2015: I've written another post that explains in more detail how these derivatives are computed.]
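One can also check this vector derivative numerically: compare the closed-form gradient $2X^TX\theta - 2X^Ty$ of the (unscaled) cost against central finite differences. A small sketch of my own, under the same setup as above:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 2
X = np.hstack([np.ones((m, 1)), rng.normal(size=(m, n))])
y = rng.normal(size=m)
theta = rng.normal(size=n + 1)

# Unscaled cost: J(theta) = (X theta - y)^T (X theta - y)
def J(t):
    r = X @ t - y
    return r @ r

# Closed-form gradient from the derivation: 2 X^T X theta - 2 X^T y
grad = 2 * X.T @ X @ theta - 2 * X.T @ y

# Central finite differences, one component of theta at a time
eps = 1e-6
grad_fd = np.array([(J(theta + eps * e) - J(theta - eps * e)) / (2 * eps)
                    for e in np.eye(n + 1)])

assert np.allclose(grad, grad_fd, atol=1e-4)
```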
Now, assuming that the matrix $X^T X$ is invertible, we can multiply both sides by $(X^T X)^{-1}$ and get:

$$\theta = (X^T X)^{-1} X^T y$$
Which is the normal equation.
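Putting it all together, here is a minimal NumPy sketch (my own illustration) that solves for $\theta$ with the normal equation and cross-checks the result against NumPy's built-in least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 50, 3
X = np.hstack([np.ones((m, 1)), rng.normal(size=(m, n))])   # design matrix
true_theta = np.array([2.0, -1.0, 0.5, 3.0])
y = X @ true_theta + 0.01 * rng.normal(size=m)              # noisy targets

# Normal equation: solve X^T X theta = X^T y
# (numerically preferable to forming the inverse (X^T X)^{-1} explicitly)
theta = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against NumPy's least-squares solver
theta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
assert np.allclose(theta, theta_lstsq)
```

Note that if $X^T X$ is not invertible (e.g. with linearly dependent features), the pseudo-inverse (`np.linalg.pinv`) yields a minimum-norm least-squares solution instead.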