Background: as in the previous exercise, the task is handwritten-digit recognition. We are given a dataset, ex4data1.mat, in which every example is a 20x20 grayscale image, so each example has 400 dimensions. Loading the data gives a 5000x400 matrix X (5000 examples) and a 5000x1 vector y (the digit each example represents). The goal is to fit a model that predicts other handwritten digits well.

(Note: we use 10 to represent the digit 0, and y follows the same convention, because Octave indexing starts at 1, so there is no row 0.)
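This label convention matters when building the cost function later: the network has 10 outputs, so y is usually expanded into one-hot rows. A minimal Octave sketch, assuming num_labels = 10 (the matrix name Y is a hypothetical choice, not part of the course code):

% Expand the 5000x1 label vector y (values 1..10, where 10 stands for
% the digit 0) into a 5000x10 one-hot matrix Y.
I = eye(num_labels);
Y = I(y, :);        % row i is all zeros except a 1 in column y(i)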

I: Neural Networks

  The neural network driver script ex4.m:

%% Machine Learning Online Class - Exercise 4: Neural Network Learning

%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  neural network exercise. You will need to complete the following
%  functions in this exercise:
%
%     sigmoidGradient.m
%     randInitializeWeights.m
%     nnCostFunction.m
%
%  For this exercise, you will not need to change any code in this file,
%  or any other files other than those mentioned above.
%

%% Initialization
clear; close all; clc

%% Setup the parameters you will use for this exercise
input_layer_size  = 400;  % 20x20 Input Images of Digits
hidden_layer_size = 25;   % 25 hidden units
num_labels = 10;          % 10 labels, from 1 to 10
                          % (note that we have mapped "0" to label 10)

%% =========== Part 1: Loading and Visualizing Data =============
%  We start the exercise by first loading and visualizing the dataset.
%  You will be working with a dataset that contains handwritten digits.
%

% Load Training Data
fprintf('Loading and Visualizing Data ...\n')
load('ex4data1.mat');
m = size(X, 1);

% Randomly select 100 data points to display
sel = randperm(size(X, 1));
sel = sel(1:100);
displayData(X(sel, :));

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 2: Loading Parameters ================
%  In this part of the exercise, we load some pre-initialized
%  neural network parameters.

fprintf('\nLoading Saved Neural Network Parameters ...\n')

% Load the weights into variables Theta1 (25x401) and Theta2 (10x26)
load('ex4weights.mat');

% Unroll parameters
nn_params = [Theta1(:) ; Theta2(:)];

%% ================ Part 3: Compute Cost (Feedforward) ================
%  For the neural network, you should first start by implementing the
%  feedforward part of the neural network that returns the cost only. You
%  should complete the code in nnCostFunction.m to return cost. After
%  implementing the feedforward to compute the cost, you can verify that
%  your implementation is correct by verifying that you get the same cost
%  as us for the fixed debugging parameters.
%
%  We suggest implementing the feedforward cost *without* regularization
%  first so that it will be easier for you to debug. Later, in Part 4, you
%  will get to implement the regularized cost.
%
fprintf('\nFeedforward Using Neural Network ...\n')

% Weight regularization parameter (we set this to 0 here).
lambda = 0;

J = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, ...
                   num_labels, X, y, lambda);

fprintf(['Cost at parameters (loaded from ex4weights): %f '...
         '\n(this value should be about 0.287629)\n'], J);

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% =============== Part 4: Implement Regularization ===============
%  Once your cost function implementation is correct, you should now
%  continue to implement the regularization with the cost.
%

fprintf('\nChecking Cost Function (w/ Regularization) ... \n')

% Weight regularization parameter (we set this to 1 here).
lambda = 1;

J = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, ...
                   num_labels, X, y, lambda);

fprintf(['Cost at parameters (loaded from ex4weights): %f '...
         '\n(this value should be about 0.383770)\n'], J);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 5: Sigmoid Gradient ================
%  Before you start implementing the neural network, you will first
%  implement the gradient for the sigmoid function. You should complete
%  the code in the sigmoidGradient.m file.
%

fprintf('\nEvaluating sigmoid gradient...\n')

g = sigmoidGradient([-1 -0.5 0 0.5 1]);
fprintf('Sigmoid gradient evaluated at [-1 -0.5 0 0.5 1]:\n  ');
fprintf('%f ', g);
fprintf('\n\n');

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 6: Initializing Parameters ================
%  In this part of the exercise, you will be starting to implement a two
%  layer neural network that classifies digits. You will start by
%  implementing a function to initialize the weights of the neural network
%  (randInitializeWeights.m)

fprintf('\nInitializing Neural Network Parameters ...\n')

initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels);

% Unroll parameters
initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];

%% =============== Part 7: Implement Backpropagation ===============
%  Once your cost matches up with ours, you should proceed to implement the
%  backpropagation algorithm for the neural network. You should add to the
%  code you've written in nnCostFunction.m to return the partial
%  derivatives of the parameters.
%
fprintf('\nChecking Backpropagation... \n');

% Check gradients by running checkNNGradients
checkNNGradients;

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% =============== Part 8: Implement Regularization ===============
%  Once your backpropagation implementation is correct, you should now
%  continue to implement the regularization with the cost and gradient.
%

fprintf('\nChecking Backpropagation (w/ Regularization) ... \n')

% Check gradients by running checkNNGradients
lambda = 3;
checkNNGradients(lambda);

% Also output the costFunction debugging values
debug_J = nnCostFunction(nn_params, input_layer_size, ...
                         hidden_layer_size, num_labels, X, y, lambda);

fprintf(['\n\nCost at (fixed) debugging parameters (w/ lambda = %f): %f ' ...
         '\n(for lambda = 3, this value should be about 0.576051)\n\n'], lambda, debug_J);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% =================== Part 9: Training NN ===================
%  You have now implemented all the code necessary to train a neural
%  network. To train your neural network, we will now use "fmincg", which
%  is a function which works similarly to "fminunc". Recall that these
%  advanced optimizers are able to train our cost functions efficiently as
%  long as we provide them with the gradient computations.
%
fprintf('\nTraining Neural Network... \n')

%  After you have completed the assignment, change the MaxIter to a larger
%  value to see how more training helps.
options = optimset('MaxIter', 50);

%  You should also try different values of lambda
lambda = 1;

% Create "short hand" for the cost function to be minimized
costFunction = @(p) nnCostFunction(p, ...
                                   input_layer_size, ...
                                   hidden_layer_size, ...
                                   num_labels, X, y, lambda);

% Now, costFunction is a function that takes in only one argument (the
% neural network parameters)
[nn_params, cost] = fmincg(costFunction, initial_nn_params, options);

% Obtain Theta1 and Theta2 back from nn_params
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================= Part 10: Visualize Weights =================
%  You can now "visualize" what the neural network is learning by
%  displaying the hidden units to see what features they are capturing in
%  the data.

fprintf('\nVisualizing Neural Network... \n')

displayData(Theta1(:, 2:end));

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ================= Part 11: Implement Predict =================
%  After training the neural network, we would like to use it to predict
%  the labels. You will now implement the "predict" function to use the
%  neural network to predict the labels of the training set. This lets
%  you compute the training set accuracy.

pred = predict(Theta1, Theta2, X);

fprintf('\nTraining Set Accuracy: %f\n', mean(double(pred == y)) * 100);

ex4.m
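Of the three files the script asks you to complete, the two small ones are short enough to sketch here. These follow the standard formulas from the course handout — g'(z) = g(z)(1 - g(z)) for the sigmoid gradient, and symmetry-breaking uniform initialization in [-ε, ε] with the handout's suggested ε_init = 0.12 — so treat them as a reference sketch rather than the only correct answer:

function g = sigmoidGradient(z)
% Element-wise gradient of the sigmoid; works for scalars, vectors, matrices.
  s = 1.0 ./ (1.0 + exp(-z));
  g = s .* (1 - s);
end

function W = randInitializeWeights(L_in, L_out)
% Random weights for a layer with L_in inputs and L_out outputs.
% The extra +1 column holds the bias weights; random values break symmetry.
  epsilon_init = 0.12;
  W = rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init;
end

Each function must live in its own file (sigmoidGradient.m, randInitializeWeights.m) for Octave to find it.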

  1. Visualizing the data (displayData on 100 randomly selected examples) shows a grid of handwritten digits. [Figure: displayData output]

  2. Feedforward and cost function

  

$J(\Theta)=-\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\left[y_k^{(i)}\log\left((h_\Theta(x^{(i)}))_k\right)+(1-y_k^{(i)})\log\left(1-(h_\Theta(x^{(i)}))_k\right)\right]+\frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\left(\Theta_{ji}^{(l)}\right)^{2}$

Note: $(h_\Theta(x^{(i)}))_k=a^{(3)}_k$ is the activation of the $k$-th output unit.
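Inside nnCostFunction, $h_\Theta(x)$ for all m examples can be computed with one vectorized feedforward pass. A sketch, assuming the sigmoid.m helper from the previous exercise:

a1 = [ones(m, 1) X];            % 5000 x 401: inputs plus bias column
z2 = a1 * Theta1';              % 5000 x 25
a2 = [ones(m, 1) sigmoid(z2)];  % 5000 x 26: hidden activations plus bias
z3 = a2 * Theta2';              % 5000 x 10
a3 = sigmoid(z3);               % 5000 x 10: row i holds (h_theta(x^(i)))_k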

When regularizing, the cost function ignores the bias terms: the index $i$ starts at 1 and runs over the $s_l$ non-bias units of layer $l$ (skipping the bias column $i=0$ of $\Theta^{(l)}$), while the innermost index $j$ runs over the $s_{l+1}$ units of layer $l+1$, i.e. the rows of $\Theta^{(l)}$.
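Combining the feedforward output a3 with the one-hot matrix Y from earlier, the full regularized cost takes only a few lines. A sketch under those assumptions (note the Theta(:, 2:end) slices, which drop the bias columns from the penalty exactly as described above):

% Cross-entropy cost, summed over all m examples and K = 10 classes
J = -(1/m) * sum(sum(Y .* log(a3) + (1 - Y) .* log(1 - a3)));

% Regularization: square every weight except the bias columns
J = J + (lambda / (2*m)) * (sum(sum(Theta1(:, 2:end).^2)) + ...
                            sum(sum(Theta2(:, 2:end).^2)));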
