%% Machine Learning Online Class - Exercise 4 Neural Network Learning

% Instructions
% ------------
%
% This file contains code that helps you get started on the
% neural network exercise. You will need to complete the following functions
% in this exercise:
%
% sigmoidGradient.m
% randInitializeWeights.m
% nnCostFunction.m
%
% For this exercise, you will not need to change any code in this file,
% or any other files other than those mentioned above.
%

%% Initialization
clear ; close all; clc
%% Setup the parameters you will use for this exercise
input_layer_size = 400; % 20x20 Input Images of Digits
hidden_layer_size = 25; % 25 hidden units
num_labels = 10; % 10 labels, from 1 to 10
                 % (note that we have mapped "0" to label 10)

%% =========== Part 1: Loading and Visualizing Data =============
% We start the exercise by first loading and visualizing the dataset.
% You will be working with a dataset that contains handwritten digits.
%

% Load Training Data
fprintf('Loading and Visualizing Data ...\n')

load('ex4data1.mat');
m = size(X, 1);

% Randomly select 100 data points to display
sel = randperm(size(X, 1));
sel = sel(1:100);
displayData(X(sel, :));

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 2: Loading Parameters ================
% In this part of the exercise, we load some pre-initialized
% neural network parameters.

fprintf('\nLoading Saved Neural Network Parameters ...\n')

% Load the weights into variables Theta1 and Theta2
load('ex4weights.mat');

% Unroll parameters
nn_params = [Theta1(:) ; Theta2(:)];
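% (Added sanity check, not part of the original ex4.m.) With the saved
% weights, Theta1 is 25 x 401 and Theta2 is 10 x 26, so the unrolled vector
% should contain 25*401 + 10*26 = 10285 elements:
fprintf('Unrolled parameter vector contains %d elements.\n', numel(nn_params));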
%% ================ Part 3: Compute Cost (Feedforward) ================
% For the neural network, you should first start by implementing the
% feedforward part of the neural network that returns the cost only. You
% should complete the code in nnCostFunction.m to return cost. After
% implementing the feedforward to compute the cost, you can verify that
% your implementation is correct by verifying that you get the same cost
% as us for the fixed debugging parameters.
%
% We suggest implementing the feedforward cost *without* regularization
% first so that it will be easier for you to debug. Later, in part 4, you
% will get to implement the regularized cost.
%
fprintf('\nFeedforward Using Neural Network ...\n')

% Weight regularization parameter (we set this to 0 here).
lambda = 0;

J = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, ...
                   num_labels, X, y, lambda);

fprintf(['Cost at parameters (loaded from ex4weights): %f '...
         '\n(this value should be about 0.287629)\n'], J);

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% =============== Part 4: Implement Regularization ===============
% Once your cost function implementation is correct, you should now
% continue to implement the regularization with the cost.
%

fprintf('\nChecking Cost Function (w/ Regularization) ... \n')

% Weight regularization parameter (we set this to 1 here).
lambda = 1;

J = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, ...
                   num_labels, X, y, lambda);

fprintf(['Cost at parameters (loaded from ex4weights): %f '...
         '\n(this value should be about 0.383770)\n'], J);
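% (Added cross-check, not part of the original ex4.m.) As an illustration of
% what the cost half of nnCostFunction.m has to compute, the lines below redo
% the calculation inline with the loaded weights: a vectorized feedforward
% pass, a one-hot recoding of y, the cross-entropy cost, and the
% regularization term (which skips the bias columns). Variable names such as
% a1, a3, Y_onehot and J_check are illustrative only, and sigmoid.m is assumed
% to be the helper supplied with the exercise.
a1 = [ones(m, 1) X];                        % input layer plus bias column
a2 = [ones(m, 1) sigmoid(a1 * Theta1')];    % hidden layer activations plus bias
a3 = sigmoid(a2 * Theta2');                 % output layer, m x num_labels
I_lbl = eye(num_labels);
Y_onehot = I_lbl(y, :);                     % recode labels 1..10 as one-hot rows
J_check = (1/m) * sum(sum(-Y_onehot .* log(a3) - (1 - Y_onehot) .* log(1 - a3))) + ...
          (lambda/(2*m)) * (sum(sum(Theta1(:, 2:end).^2)) + sum(sum(Theta2(:, 2:end).^2)));
fprintf('Inline cross-check of the regularized cost: %f (should match the value above)\n', J_check);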
fprintf('Program paused. Press enter to continue.\n');
pause;
%% ================ Part 5: Sigmoid Gradient ================
% Before you start implementing the neural network, you will first
% implement the gradient for the sigmoid function. You should complete the
% code in the sigmoidGradient.m file.
%

fprintf('\nEvaluating sigmoid gradient...\n')

g = sigmoidGradient([1 -0.5 0 0.5 1]);
fprintf('Sigmoid gradient evaluated at [1 -0.5 0 0.5 1]:\n ');
fprintf('%f ', g);
fprintf('\n\n');

fprintf('Program paused. Press enter to continue.\n');
pause;
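% (Added reference sketch, not part of the original ex4.m.) One possible
% implementation of sigmoidGradient.m; it belongs in its own file and relies
% on the sigmoid.m helper that ships with the exercise. Since g(z) = sigmoid(z),
% the derivative is g'(z) = g(z) .* (1 - g(z)) element-wise, so the values
% printed above should come out near 0.196612 0.235004 0.250000 0.235004 0.196612.

function g = sigmoidGradient(z)
%SIGMOIDGRADIENT Gradient of the sigmoid function, evaluated element-wise at z.
    s = sigmoid(z);        % works for scalars, vectors and matrices
    g = s .* (1 - s);
end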
%% ================ Part 6: Initializing Parameters ================
% In this part of the exercise, you will be starting to implement a two
% layer neural network that classifies digits. You will start by
% implementing a function to initialize the weights of the neural network
% (randInitializeWeights.m)

fprintf('\nInitializing Neural Network Parameters ...\n')

initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels);

% Unroll parameters
initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];
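% (Added reference sketch, not part of the original ex4.m.) One possible
% implementation of randInitializeWeights.m, which belongs in its own file.
% It breaks symmetry by drawing each weight uniformly from
% [-epsilon_init, epsilon_init]; 0.12 is a commonly used choice here, roughly
% sqrt(6)/sqrt(L_in + L_out) for these layer sizes.

function W = randInitializeWeights(L_in, L_out)
%RANDINITIALIZEWEIGHTS Randomly initialize the weights of a layer with L_in
%incoming connections and L_out outgoing connections. The returned matrix has
%size L_out x (1 + L_in), the extra column covering the bias term.
    epsilon_init = 0.12;
    W = rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init;
end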
%% =============== Part 7: Implement Backpropagation ===============
% Once your cost matches up with ours, you should proceed to implement the
% backpropagation algorithm for the neural network. You should add to the
% code you've written in nnCostFunction.m to return the partial
% derivatives of the parameters.
%
fprintf('\nChecking Backpropagation... \n');

% Check gradients by running checkNNGradients
checkNNGradients;

fprintf('\nProgram paused. Press enter to continue.\n');
pause;
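% (Added reference sketch, not part of the original ex4.m.) The gradient half
% of nnCostFunction.m is yours to write; the hypothetical helper below (the
% name backpropGradientSketch is made up for this illustration) shows one
% vectorized way to get from the weight matrices to the unrolled gradient,
% including the regularization added in Part 8. Inside nnCostFunction.m the
% same lines would simply follow your feedforward code.

function grad = backpropGradientSketch(Theta1, Theta2, X, Y_onehot, lambda)
%BACKPROPGRADIENTSKETCH Illustrative vectorized backpropagation for a
%two-layer network; Y_onehot is the m x num_labels one-hot label matrix.
    m  = size(X, 1);
    a1 = [ones(m, 1) X];                       % input layer plus bias
    z2 = a1 * Theta1';
    a2 = [ones(m, 1) sigmoid(z2)];             % hidden layer plus bias
    a3 = sigmoid(a2 * Theta2');                % output layer (hypothesis)

    delta3 = a3 - Y_onehot;                                        % output-layer error
    delta2 = (delta3 * Theta2(:, 2:end)) .* sigmoidGradient(z2);   % hidden-layer error

    Theta1_grad = (delta2' * a1) / m;
    Theta2_grad = (delta3' * a2) / m;

    % Regularization (Part 8): never regularize the bias column
    Theta1_grad(:, 2:end) = Theta1_grad(:, 2:end) + (lambda/m) * Theta1(:, 2:end);
    Theta2_grad(:, 2:end) = Theta2_grad(:, 2:end) + (lambda/m) * Theta2(:, 2:end);

    grad = [Theta1_grad(:) ; Theta2_grad(:)];
end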
%% =============== Part 8: Implement Regularization ===============
% Once your backpropagation implementation is correct, you should now
% continue to implement the regularization with the cost and gradient.
%

fprintf('\nChecking Backpropagation (w/ Regularization) ... \n')

% Check gradients by running checkNNGradients
lambda = 3;
checkNNGradients(lambda);

% Also output the costFunction debugging values
debug_J = nnCostFunction(nn_params, input_layer_size, ...
                         hidden_layer_size, num_labels, X, y, lambda);

fprintf(['\n\nCost at (fixed) debugging parameters (w/ lambda = 3): %f ' ...
         '\n(this value should be about 0.576051)\n\n'], debug_J);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% =================== Part 8: Training NN ===================
% You have now implemented all the code necessary to train a neural
% network. To train your neural network, we will now use "fmincg", which
% is a function which works similarly to "fminunc". Recall that these
% advanced optimizers are able to train our cost functions efficiently as
% long as we provide them with the gradient computations.
%
fprintf('\nTraining Neural Network... \n')

% After you have completed the assignment, change the MaxIter to a larger
% value to see how more training helps.
options = optimset('MaxIter', 50);

% You should also try different values of lambda
lambda = 1;

% Create "short hand" for the cost function to be minimized
costFunction = @(p) nnCostFunction(p, ...
                                   input_layer_size, ...
                                   hidden_layer_size, ...
                                   num_labels, X, y, lambda);

% Now, costFunction is a function that takes in only one argument (the
% neural network parameters)
[nn_params, cost] = fmincg(costFunction, initial_nn_params, options);

% Obtain Theta1 and Theta2 back from nn_params
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));

Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================= Part 9: Visualize Weights =================
% You can now "visualize" what the neural network is learning by
% displaying the hidden units to see what features they are capturing in
% the data.

fprintf('\nVisualizing Neural Network... \n')

displayData(Theta1(:, 2:end));

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ================= Part 10: Implement Predict =================
% After training the neural network, we would like to use it to predict
% the labels. You will now implement the "predict" function to use the
% neural network to predict the labels of the training set. This lets
% you compute the training set accuracy.

pred = predict(Theta1, Theta2, X);

fprintf('\nTraining Set Accuracy: %f\n', mean(double(pred == y)) * 100);
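% (Added reference sketch, not part of the original ex4.m.) For reference, the
% prediction step amounts to one more feedforward pass followed by an arg-max
% over the output units; a minimal predict.m along those lines (one possible
% implementation, kept in its own file) could be:

function p = predict(Theta1, Theta2, X)
%PREDICT Predict the label of each example in X using a trained two-layer network.
    m  = size(X, 1);
    h1 = sigmoid([ones(m, 1) X] * Theta1');    % hidden layer activations
    h2 = sigmoid([ones(m, 1) h1] * Theta2');   % output layer, m x num_labels
    [~, p] = max(h2, [], 2);                   % most probable label per row
end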

  
