%% Machine Learning Online Class - Exercise 4 Neural Network Learning

%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  neural network exercise. You will need to complete the following
%  functions in this exercise:
%
%     sigmoidGradient.m
%     randInitializeWeights.m
%     nnCostFunction.m
%
%  For this exercise, you will not need to change any code in this file,
%  or any other files other than those mentioned above.
%

%% Initialization
clear; close all; clc
%% Setup the parameters you will use for this exercise
input_layer_size  = 400;   % 20x20 Input Images of Digits
hidden_layer_size = 25;    % 25 hidden units
num_labels = 10;           % 10 labels, from 1 to 10
                           % (note that we have mapped "0" to label 10)

%% =========== Part 1: Loading and Visualizing Data =============
%  We start the exercise by first loading and visualizing the dataset.
%  You will be working with a dataset that contains handwritten digits.
%

% Load Training Data
fprintf('Loading and Visualizing Data ...\n')

load('ex4data1.mat');
m = size(X, 1);

% Randomly select 100 data points to display
sel = randperm(size(X, 1));
sel = sel(1:100);
displayData(X(sel, :));

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 2: Loading Parameters ================
%  In this part of the exercise, we load some pre-initialized
%  neural network parameters.

fprintf('\nLoading Saved Neural Network Parameters ...\n')

% Load the weights into variables Theta1 and Theta2
load('ex4weights.mat');

% Unroll parameters
nn_params = [Theta1(:) ; Theta2(:)];
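
% The two weight matrices are unrolled into a single column vector because the
% optimizer and the gradient-checking code work with one long parameter vector.
% A minimal sketch of the reverse operation (the *_check names are illustrative
% only; the same reshape appears again in the training section below):
%
%   Theta1_check = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
%                          hidden_layer_size, (input_layer_size + 1));
%   Theta2_check = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
%                          num_labels, (hidden_layer_size + 1));
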
%% ================ Part 3: Compute Cost (Feedforward) ================
%  For the neural network, you should first start by implementing the
%  feedforward part of the neural network that returns the cost only. You
%  should complete the code in nnCostFunction.m to return cost. After
%  implementing the feedforward to compute the cost, you can verify that
%  your implementation is correct by checking that you get the same cost
%  as us for the fixed debugging parameters.
%
%  We suggest implementing the feedforward cost *without* regularization
%  first so that it will be easier for you to debug. Later, in Part 4, you
%  will get to implement the regularized cost.
%
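%  A minimal sketch of the unregularized feedforward cost that nnCostFunction.m
%  is expected to return (illustrative only: the intermediate variable names are
%  assumptions, and sigmoid.m is the small helper provided with the exercise):
%
%      a1 = [ones(m, 1) X];                  % add bias column to the inputs
%      a2 = [ones(m, 1) sigmoid(a1 * Theta1')];
%      h  = sigmoid(a2 * Theta2');           % m x num_labels hypothesis
%      I  = eye(num_labels);
%      Y  = I(y, :);                         % one-hot encoding of the labels
%      J  = (-1 / m) * sum(sum(Y .* log(h) + (1 - Y) .* log(1 - h)));
%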
fprintf('\nFeedforward Using Neural Network ...\n')

% Weight regularization parameter (we set this to 0 here).
lambda = 0;

J = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, ...
                   num_labels, X, y, lambda);

fprintf(['Cost at parameters (loaded from ex4weights): %f '...
         '\n(this value should be about 0.287629)\n'], J);

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% =============== Part 4: Implement Regularization ===============
%  Once your cost function implementation is correct, you should now
%  continue to implement the regularization with the cost.
%
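%  A minimal sketch of the regularization term added to that cost (illustrative
%  only); note that the bias columns (the first column of each Theta) are not
%  regularized:
%
%      reg = (lambda / (2 * m)) * (sum(sum(Theta1(:, 2:end) .^ 2)) + ...
%                                  sum(sum(Theta2(:, 2:end) .^ 2)));
%      J   = J + reg;
%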

fprintf('\nChecking Cost Function (w/ Regularization) ... \n')

% Weight regularization parameter (we set this to 1 here).
lambda = 1;

J = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, ...
                   num_labels, X, y, lambda);

fprintf(['Cost at parameters (loaded from ex4weights): %f '...
         '\n(this value should be about 0.383770)\n'], J);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 5: Sigmoid Gradient ================
%  Before you start implementing the neural network, you will first
%  implement the gradient for the sigmoid function. You should complete the
%  code in the sigmoidGradient.m file.
%
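%  A minimal sketch of sigmoidGradient.m (assuming the standard identity
%  g'(z) = g(z) .* (1 - g(z)), applied element-wise so it also works on
%  vectors and matrices):
%
%      function g = sigmoidGradient(z)
%          s = sigmoid(z);        % sigmoid.m is provided with the exercise
%          g = s .* (1 - s);
%      end
%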

fprintf('\nEvaluating sigmoid gradient...\n')

g = sigmoidGradient([1 -0.5 0 0.5 1]);
fprintf('Sigmoid gradient evaluated at [1 -0.5 0 0.5 1]:\n ');
fprintf('%f ', g);
fprintf('\n\n');

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 6: Initializing Parameters ================
%  In this part of the exercise, you will be starting to implement a two
%  layer neural network that classifies digits. You will start by
%  implementing a function to initialize the weights of the neural network
%  (randInitializeWeights.m)
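%
%  A minimal sketch of randInitializeWeights.m (assuming the common heuristic
%  of sampling uniformly from [-epsilon_init, epsilon_init], with epsilon_init
%  chosen as roughly sqrt(6) / sqrt(L_in + L_out), about 0.12 here):
%
%      function W = randInitializeWeights(L_in, L_out)
%          epsilon_init = 0.12;
%          W = rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init;
%      end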

fprintf('\nInitializing Neural Network Parameters ...\n')

initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels);

% Unroll parameters
initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];

%% =============== Part 7: Implement Backpropagation ===============
%  Once your cost matches up with ours, you should proceed to implement the
%  backpropagation algorithm for the neural network. You should add to the
%  code you've written in nnCostFunction.m to return the partial
%  derivatives of the parameters.
%
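%  A minimal sketch of the (unregularized) backpropagation step inside
%  nnCostFunction.m (illustrative only; it reuses the a1, a2, h and one-hot Y
%  from the feedforward sketch in Part 3):
%
%      delta3 = h - Y;                                  % output-layer error, m x num_labels
%      delta2 = (delta3 * Theta2(:, 2:end)) .* sigmoidGradient(a1 * Theta1');
%      Theta1_grad = (delta2' * a1) / m;                % same size as Theta1
%      Theta2_grad = (delta3' * a2) / m;                % same size as Theta2
%      grad = [Theta1_grad(:) ; Theta2_grad(:)];        % unrolled gradient
%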
fprintf('\nChecking Backpropagation... \n');

% Check gradients by running checkNNGradients
checkNNGradients;

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% =============== Part 8: Implement Regularization ===============
%  Once your backpropagation implementation is correct, you should now
%  continue to implement the regularization with the cost and gradient.
%
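%  A minimal sketch of the regularized gradient (illustrative only); as with
%  the cost, the bias columns are left unregularized:
%
%      Theta1_grad(:, 2:end) = Theta1_grad(:, 2:end) + (lambda / m) * Theta1(:, 2:end);
%      Theta2_grad(:, 2:end) = Theta2_grad(:, 2:end) + (lambda / m) * Theta2(:, 2:end);
%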

fprintf('\nChecking Backpropagation (w/ Regularization) ... \n')

% Check gradients by running checkNNGradients
lambda = 3;
checkNNGradients(lambda);

% Also output the costFunction debugging values
debug_J = nnCostFunction(nn_params, input_layer_size, ...
                         hidden_layer_size, num_labels, X, y, lambda);

fprintf(['\n\nCost at (fixed) debugging parameters (w/ lambda = 3): %f ' ...
         '\n(this value should be about 0.576051)\n\n'], debug_J);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% =================== Part 9: Training NN ===================
%  You have now implemented all the code necessary to train a neural
%  network. To train your neural network, we will now use "fmincg", which
%  is a function that works similarly to "fminunc". Recall that these
%  advanced optimizers are able to train our cost functions efficiently as
%  long as we provide them with the gradient computations.
%
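%  fmincg.m is provided with the exercise. Assuming the Optimization Toolbox is
%  available, the same cost handle built below could instead be minimized with
%  fminunc, e.g.:
%
%      options = optimset('GradObj', 'on', 'MaxIter', 50);
%      [nn_params, cost] = fminunc(costFunction, initial_nn_params, options);
%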
fprintf('\nTraining Neural Network... \n')

%  After you have completed the assignment, change the MaxIter to a larger
%  value to see how more training helps.
options = optimset('MaxIter', 50);

%  You should also try different values of lambda
lambda = 1;

% Create "short hand" for the cost function to be minimized
costFunction = @(p) nnCostFunction(p, ...
                                   input_layer_size, ...
                                   hidden_layer_size, ...
                                   num_labels, X, y, lambda);

% Now, costFunction is a function that takes in only one argument (the
% neural network parameters)
[nn_params, cost] = fmincg(costFunction, initial_nn_params, options);

% Obtain Theta1 and Theta2 back from nn_params
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));

Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));

fprintf('Program paused. Press enter to continue.\n');
pause;
%% ================= Part 10: Visualize Weights =================
%  You can now "visualize" what the neural network is learning by
%  displaying the hidden units to see what features they are capturing in
%  the data.

fprintf('\nVisualizing Neural Network... \n')

displayData(Theta1(:, 2:end));

fprintf('\nProgram paused. Press enter to continue.\n');
pause;
%% ================= Part 11: Implement Predict =================
%  After training the neural network, we would like to use it to predict
%  the labels. You will now implement the "predict" function to use the
%  neural network to predict the labels of the training set. This lets
%  you compute the training set accuracy.
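%
%  A minimal sketch of predict.m (illustrative only): run a feedforward pass
%  and, for each example, return the label with the largest output activation:
%
%      function p = predict(Theta1, Theta2, X)
%          m  = size(X, 1);
%          h1 = sigmoid([ones(m, 1) X] * Theta1');
%          h2 = sigmoid([ones(m, 1) h1] * Theta2');
%          [~, p] = max(h2, [], 2);
%      end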

pred = predict(Theta1, Theta2, X);

fprintf('\nTraining Set Accuracy: %f\n', mean(double(pred == y)) * 100);

  
