1. Theory
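
For a single feature x (here, a city's population), linear regression fits the hypothesis

    h_theta(x) = theta_0 + theta_1 * x

by minimizing the squared-error cost over the m training examples (x^(i), y^(i)):

    J(theta_0, theta_1) = (1/(2m)) * sum_{i=1..m} (h_theta(x^(i)) - y^(i))^2

Batch gradient descent minimizes J by repeating, with learning rate alpha and the convention x_0^(i) = 1,

    theta_j := theta_j - alpha * (1/m) * sum_{i=1..m} (h_theta(x^(i)) - y^(i)) * x_j^(i),   for j = 0, 1

updating both parameters simultaneously on each pass until J stops decreasing. The code in section 3 implements exactly these formulas.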

2. Dataset

Each line of ex1data1.txt below is one training example: the first column is a city's population (in units of 10,000) and the second is the food-truck profit in that city (in units of $10,000; negative values are losses).

6.1101,17.592
5.5277,9.1302
8.5186,13.662
7.0032,11.854
5.8598,6.8233
8.3829,11.886
7.4764,4.3483
8.5781,12
6.4862,6.5987
5.0546,3.8166
5.7107,3.2522
14.164,15.505
5.734,3.1551
8.4084,7.2258
5.6407,0.71618
5.3794,3.5129
6.3654,5.3048
5.1301,0.56077
6.4296,3.6518
7.0708,5.3893
6.1891,3.1386
20.27,21.767
5.4901,4.263
6.3261,5.1875
5.5649,3.0825
18.945,22.638
12.828,13.501
10.957,7.0467
13.176,14.692
22.203,24.147
5.2524,-1.22
6.5894,5.9966
9.2482,12.134
5.8918,1.8495
8.2111,6.5426
7.9334,4.5623
8.0959,4.1164
5.6063,3.3928
12.836,10.117
6.3534,5.4974
5.4069,0.55657
6.8825,3.9115
11.708,5.3854
5.7737,2.4406
7.8247,6.7318
7.0931,1.0463
5.0702,5.1337
5.8014,1.844
11.7,8.0043
5.5416,1.0179
7.5402,6.7504
5.3077,1.8396
7.4239,4.2885
7.6031,4.9981
6.3328,1.4233
6.3589,-1.4211
6.2742,2.4756
5.6397,4.6042
9.3102,3.9624
9.4536,5.4141
8.8254,5.1694
5.1793,-0.74279
21.279,17.929
14.908,12.054
18.959,17.054
7.2182,4.8852
8.2951,5.7442
10.236,7.7754
5.4994,1.0173
20.341,20.992
10.136,6.6799
7.3345,4.0259
6.0062,1.2784
7.2259,3.3411
5.0269,-2.6807
6.5479,0.29678
7.5386,3.8845
5.0365,5.7014
10.274,6.7526
5.1077,2.0576
5.7292,0.47953
5.1884,0.20421
6.3557,0.67861
9.7687,7.5435
6.5159,5.3436
8.5172,4.2415
9.1802,6.7981
6.002,0.92695
5.5204,0.152
5.0594,2.8214
5.7077,1.8451
7.6366,4.2959
5.8707,7.2029
5.3054,1.9869
8.2934,0.14454
13.394,9.0551
5.4369,0.61705

3. Code implementation

The main script below loads the data, runs batch gradient descent for 1500 iterations, plots the fitted line over the training data, and then visualizes J on a grid of (theta_0, theta_1) values as a surface plot and a contour plot. The two helper functions, computeCost and gradientDescent, follow it.

clear all;
clc;

data = load('ex1data1.txt');
X = data(:, 1); y = data(:, 2);
m = length(y); % number of training examples
plot(X, y, 'rx');

%% =================== Part 3: Gradient descent ===================
fprintf('Running Gradient Descent ...\n')

% Add a column of ones to X so that theta(1), the intercept term,
% is multiplied by 1 when computing J.
X = [ones(m, 1), data(:,1)];
theta = zeros(2, 1); % initialize fitting parameters

% Some gradient descent settings
iterations = 1500;
alpha = 0.01;

% Compute and display the initial cost
computeCost(X, y, theta)

% Run gradient descent
[theta, J_history] = gradientDescent(X, y, theta, alpha, iterations);

hold on; % keep previous plot visible
plot(X(:,2), X*theta, '-')
legend('Training data', 'Linear regression')
hold off % don't overlay any more plots on this figure

% Predict values for population sizes of 35,000 and 70,000
predict1 = [1, 3.5] * theta;
fprintf('For population = 35,000, we predict a profit of %f\n', ...
    predict1*10000);
predict2 = [1, 7] * theta;
fprintf('For population = 70,000, we predict a profit of %f\n', ...
    predict2*10000);

% Grid over which we will calculate J
theta0_vals = linspace(-10, 10, 100);
theta1_vals = linspace(-1, 4, 100);

% Initialize J_vals to a matrix of 0's, then fill it out
J_vals = zeros(length(theta0_vals), length(theta1_vals));
for i = 1:length(theta0_vals)
    for j = 1:length(theta1_vals)
        t = [theta0_vals(i); theta1_vals(j)];
        J_vals(i,j) = computeCost(X, y, t);
    end
end

% Because of the way meshgrids work in the surf command, we need to
% transpose J_vals before calling surf, or else the axes will be flipped
J_vals = J_vals';

% Surface plot
figure;
surf(theta0_vals, theta1_vals, J_vals)
xlabel('\theta_0'); ylabel('\theta_1');

% Contour plot
figure;
% Plot J_vals as 20 contours spaced logarithmically between 0.01 and 1000:
% logspace(-2, 3, 20) generates 20 points from 10^-2 to 10^3, used here
% as the contour levels.
contour(theta0_vals, theta1_vals, J_vals, logspace(-2, 3, 20))
xlabel('\theta_0'); ylabel('\theta_1');
hold on;
plot(theta(1), theta(2), 'rx', 'MarkerSize', 10, 'LineWidth', 2);
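
To verify convergence, the J_history vector returned by gradientDescent can be plotted against the iteration number; a minimal sketch:

figure;
plot(1:length(J_history), J_history, '-b', 'LineWidth', 1.5);
xlabel('Iteration'); ylabel('Cost J');
% With a small enough alpha (0.01 here), J should decrease on every iteration.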

computeCost.m:

function J = computeCost(X, y, theta)

m = length(y); % number of training examples
J = 0;
for i = 1:m
    J = J + (theta(1)*X(i,1) + theta(2)*X(i,2) - y(i))^2;
end
% Dividing by 2m keeps the parameter update simple: J is quadratic, so
% taking its partial derivative produces a factor of 2 that cancels the
% 1/2; the 1/m keeps J from growing with the number of examples (the
% sum over i = 1:m already contains the m terms of the gradient).
J = J/(m*2);
end
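
The same cost can be computed without the loop. A minimal vectorized sketch, equivalent to the function above (the name computeCostVectorized is ours, not part of the exercise):

function J = computeCostVectorized(X, y, theta)
% The residual vector X*theta - y replaces the per-example loop;
% its squared norm is the sum of squared errors.
m = length(y);
residual = X*theta - y;
J = (residual' * residual) / (2*m);
end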

gradientDescent.m:

function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)

m = length(y); % number of training examples
J_history = zeros(num_iters, 1);

for iter = 1:num_iters
    % Partial derivatives of J with respect to theta(1) and theta(2).
    % They must be reset to zero at the start of every iteration so the
    % sums do not carry over from the previous update.
    J_1 = 0;
    J_2 = 0;
    for i = 1:m
        J_1 = J_1 + theta(1)*X(i,1) + theta(2)*X(i,2) - y(i);
        J_2 = J_2 + (theta(1)*X(i,1) + theta(2)*X(i,2) - y(i)) * X(i,2);
    end
    % The division by m is done outside the inner loop: dividing the
    % accumulated sums once is enough.
    J_1 = J_1/m;
    J_2 = J_2/m;
    % Both partial derivatives were computed from the old theta, so
    % updating the two components in sequence is still a simultaneous
    % update.
    theta(1) = theta(1) - alpha * J_1;
    theta(2) = theta(2) - alpha * J_2;
    J_history(iter) = computeCost(X, y, theta);
end
end
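
The inner loop can likewise be collapsed into one matrix expression, since X' * (X*theta - y) computes both partial-derivative sums at once. A sketch with the same interface (the name gradientDescentVectorized is ours):

function [theta, J_history] = gradientDescentVectorized(X, y, theta, alpha, num_iters)
m = length(y);
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
    % (1/m) * X' * (X*theta - y) is the gradient of J at the current theta
    theta = theta - (alpha/m) * (X' * (X*theta - y));
    J_history(iter) = computeCost(X, y, theta);
end
end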

4. Results
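
With alpha = 0.01 and 1500 iterations on this dataset, the script should print an initial cost of roughly 32.07 and converge to theta of roughly (-3.63, 1.17), so the two predictions come out near $4,520 and $45,340; the red x on the contour plot marks this minimum of J.

As a cross-check that needs no learning rate or iteration count, the same minimum can be computed in closed form with the normal equation; a minimal sketch, assuming X still contains its leading column of ones:

theta_closed = pinv(X' * X) * X' * y;   % normal equation: exact minimizer of J
fprintf('Closed-form theta: %f, %f\n', theta_closed(1), theta_closed(2));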
