%% Machine Learning Online Class - Exercise 4 Neural Network Learning

% Instructions
% ------------
%
% This file contains code that helps you get started on the
% neural network exercise. You will need to complete the following functions
% in this exercise:
%
% sigmoidGradient.m
% randInitializeWeights.m
% nnCostFunction.m
%
% For this exercise, you will not need to change any code in this file,
% or any other files other than those mentioned above.
%

%% Initialization
clear ; close all; clc
%% Setup the parameters you will use for this exercise
input_layer_size = 400; % 20x20 Input Images of Digits
hidden_layer_size = 25; % 25 hidden units
num_labels = 10; % 10 labels, from 1 to 10
%                         (note that we have mapped "0" to label 10)

%% =========== Part 1: Loading and Visualizing Data =============
% We start the exercise by first loading and visualizing the dataset.
% You will be working with a dataset that contains handwritten digits.
%

% Load Training Data
fprintf('Loading and Visualizing Data ...\n')

load('ex4data1.mat');
m = size(X, 1);

% Randomly select 100 data points to display
sel = randperm(size(X, 1));
sel = sel(1:100);
displayData(X(sel, :));

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 2: Loading Parameters ================
% In this part of the exercise, we load some pre-initialized
% neural network parameters.

fprintf('\nLoading Saved Neural Network Parameters ...\n')

% Load the weights into variables Theta1 and Theta2
load('ex4weights.mat');

% Unroll parameters
nn_params = [Theta1(:) ; Theta2(:)];
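
% The two weight matrices are unrolled into a single column vector so the
% optimizer can treat all parameters uniformly; reshape recovers them
% later (see Part 9). A quick round-trip sketch, added for illustration:
%
%   T1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
%                hidden_layer_size, input_layer_size + 1);
%   isequal(T1, Theta1)   % logical 1: unrolling loses no information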
%% ================ Part 3: Compute Cost (Feedforward) ================
% For the neural network, you should first start by implementing the
% feedforward part of the neural network that returns the cost only. You
% should complete the code in nnCostFunction.m to return the cost. After
% implementing the feedforward pass to compute the cost, you can verify
% that your implementation is correct by checking that you get the same
% cost as us for the fixed debugging parameters.
%
% We suggest implementing the feedforward cost *without* regularization
% first so that it will be easier for you to debug. Later, in part 4, you
% will get to implement the regularized cost.
%
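% A sketch of the unregularized feedforward cost inside nnCostFunction.m
% (an illustration, not the graded solution; it assumes m = size(X, 1),
% a sigmoid.m helper on the path, and the usual 0/1 recoding of y):
%
%   a1 = [ones(m, 1) X];                          % m x 401, add bias
%   z2 = a1 * Theta1';
%   a2 = [ones(m, 1) sigmoid(z2)];                % m x 26, add bias
%   h  = sigmoid(a2 * Theta2');                   % m x 10 hypotheses
%   I  = eye(num_labels); Y = I(y, :);            % labels as 0/1 rows
%   J  = (1 / m) * sum(sum(-Y .* log(h) - (1 - Y) .* log(1 - h)));
%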
fprintf('\nFeedforward Using Neural Network ...\n')

% Weight regularization parameter (we set this to 0 here).
lambda = 0;

J = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, ...
                   num_labels, X, y, lambda);

fprintf(['Cost at parameters (loaded from ex4weights): %f '...
         '\n(this value should be about 0.287629)\n'], J);

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% =============== Part 4: Implement Regularization ===============
% Once your cost function implementation is correct, you should now
% continue to implement the regularization with the cost.
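%
% The regularized cost adds, for each non-bias weight, lambda/(2m) times
% its square. A sketch of the extra term inside nnCostFunction.m
% (illustrative only, using this exercise's Theta1/Theta2 names):
%
%   reg = (lambda / (2 * m)) * (sum(sum(Theta1(:, 2:end) .^ 2)) + ...
%                               sum(sum(Theta2(:, 2:end) .^ 2)));
%   J = J + reg;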
%

fprintf('\nChecking Cost Function (w/ Regularization) ... \n')

% Weight regularization parameter (we set this to 1 here).
lambda = 1;

J = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, ...
                   num_labels, X, y, lambda);

fprintf(['Cost at parameters (loaded from ex4weights): %f '...
         '\n(this value should be about 0.383770)\n'], J);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 5: Sigmoid Gradient ================
% Before you start implementing the neural network, you will first
% implement the gradient for the sigmoid function. You should complete the
% code in the sigmoidGradient.m file.
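%
% A minimal sketch of sigmoidGradient.m (illustrative, assuming the
% sigmoid.m helper from this exercise is on the path):
%
%   function g = sigmoidGradient(z)
%       s = sigmoid(z);       % g(z) = 1 ./ (1 + exp(-z)), element-wise
%       g = s .* (1 - s);     % g'(z) = g(z) .* (1 - g(z))
%   end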
%

fprintf('\nEvaluating sigmoid gradient...\n')

g = sigmoidGradient([1 -0.5 0 0.5 1]);
fprintf('Sigmoid gradient evaluated at [1 -0.5 0 0.5 1]:\n ');
fprintf('%f ', g);
fprintf('\n\n');

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 6: Initializing Parameters ================
% In this part of the exercise, you will be starting to implement a
% two-layer neural network that classifies digits. You will start by
% implementing a function to initialize the weights of the neural network
% (randInitializeWeights.m)

fprintf('\nInitializing Neural Network Parameters ...\n')

initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels);
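
% For reference, a sketch of randInitializeWeights.m (called above) using
% the symmetry-breaking strategy from the exercise text (illustrative only;
% epsilon_init = 0.12 is the value the course handout suggests, treated
% here as an assumption):
%
%   function W = randInitializeWeights(L_in, L_out)
%       epsilon_init = 0.12;
%       W = rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init;
%   end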

% Unroll parameters
initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];

%% =============== Part 7: Implement Backpropagation ===============
% Once your cost matches up with ours, you should proceed to implement the
% backpropagation algorithm for the neural network. You should add to the
% code you've written in nnCostFunction.m to return the partial
% derivatives of the parameters.
%
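% A vectorized sketch of the backpropagation step inside nnCostFunction.m
% (one common layout, for orientation only; a1, z2, a2, h, and Y follow
% the feedforward sketch in Part 3):
%
%   delta3 = h - Y;                                              % m x 10
%   delta2 = (delta3 * Theta2(:, 2:end)) .* sigmoidGradient(z2); % m x 25
%   Theta1_grad = (delta2' * a1) / m;                            % 25 x 401
%   Theta2_grad = (delta3' * a2) / m;                            % 10 x 26
%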
fprintf('\nChecking Backpropagation... \n');

% Check gradients by running checkNNGradients
checkNNGradients;

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% =============== Part 8: Implement Regularization ===============
% Once your backpropagation implementation is correct, you should now
% continue to implement the regularization with the cost and gradient.
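%
% Regularizing the gradient adds (lambda / m) * Theta to every column
% except the bias column. A sketch of the extra step (illustrative only):
%
%   Theta1_grad(:, 2:end) = Theta1_grad(:, 2:end) + (lambda / m) * Theta1(:, 2:end);
%   Theta2_grad(:, 2:end) = Theta2_grad(:, 2:end) + (lambda / m) * Theta2(:, 2:end);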
%

fprintf('\nChecking Backpropagation (w/ Regularization) ... \n')

% Check gradients by running checkNNGradients
lambda = 3;
checkNNGradients(lambda);

% Also output the costFunction debugging values
debug_J = nnCostFunction(nn_params, input_layer_size, ...
                         hidden_layer_size, num_labels, X, y, lambda);

fprintf(['\n\nCost at (fixed) debugging parameters (w/ lambda = 3): %f ' ...
         '\n(this value should be about 0.576051)\n\n'], debug_J);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% =================== Part 9: Training NN ===================
% You have now implemented all the code necessary to train a neural
% network. To train your neural network, we will now use "fmincg", which
% is a function which works similarly to "fminunc". Recall that these
% advanced optimizers are able to train our cost functions efficiently as
% long as we provide them with the gradient computations.
%
fprintf('\nTraining Neural Network... \n')

% After you have completed the assignment, change the MaxIter to a larger
% value to see how more training helps.
options = optimset('MaxIter', 50);

% You should also try different values of lambda
lambda = 1;

% Create "short hand" for the cost function to be minimized
costFunction = @(p) nnCostFunction(p, ...
input_layer_size, ...
hidden_layer_size, ...
                                   num_labels, X, y, lambda);

% Now, costFunction is a function that takes in only one argument (the
% neural network parameters)
[nn_params, cost] = fmincg(costFunction, initial_nn_params, options);

% Obtain Theta1 and Theta2 back from nn_params
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));

Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================= Part 10: Visualize Weights =================
% You can now "visualize" what the neural network is learning by
% displaying the hidden units to see what features they are capturing in
% the data.

fprintf('\nVisualizing Neural Network... \n')

displayData(Theta1(:, 2:end));

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ================= Part 11: Implement Predict =================
% After training the neural network, we would like to use it to predict
% the labels. You will now implement the "predict" function to use the
% neural network to predict the labels of the training set. This lets
% you compute the training set accuracy.

pred = predict(Theta1, Theta2, X);

fprintf('\nTraining Set Accuracy: %f\n', mean(double(pred == y)) * 100);
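
% For reference, a sketch of what predict.m (used above) computes: one
% feedforward pass followed by an argmax over the ten outputs
% (illustrative only, assuming sigmoid.m is on the path):
%
%   function p = predict(Theta1, Theta2, X)
%       m = size(X, 1);
%       h1 = sigmoid([ones(m, 1) X] * Theta1');   % hidden activations
%       h2 = sigmoid([ones(m, 1) h1] * Theta2');  % output layer
%       [~, p] = max(h2, [], 2);                  % predicted label 1..10
%   end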