Background: As in the previous exercise, the task is handwritten digit recognition. We are given the dataset ex4data1.mat, in which every example is a grayscale image of 20x20 pixels, so each example has 400 dimensions. After loading the data we have a 5000x400 matrix X (5000 examples) and a 5000x1 vector y (the digit each example represents). The goal is to fit a model that predicts other handwritten digits well.

(Note: we use the label 10 to represent the digit 0, in the vector y as well, because Octave indexing is 1-based and has no row 0.)
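Because the cost function below compares the network's 10 outputs against a per-example 0/1 target vector, it is convenient to expand the labels in y into one-hot rows first. A minimal Octave sketch (the variable name Y is my own, not from the exercise code):

% Recode y (m x 1, labels 1..10, where 10 stands for the digit 0)
% into Y (m x num_labels) with exactly one 1 per row.
num_labels = 10;
m = length(y);
Y = zeros(m, num_labels);
for i = 1:m
    Y(i, y(i)) = 1;     % row i gets a 1 in the column of its label
end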

I. Neural Networks

The driver script ex4.m:

%% Machine Learning Online Class - Exercise 4: Neural Network Learning

%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  neural network exercise. You will need to complete the following
%  functions in this exercise:
%
%     sigmoidGradient.m
%     randInitializeWeights.m
%     nnCostFunction.m
%
%  For this exercise, you will not need to change any code in this file,
%  or any other files other than those mentioned above.
%

%% Initialization
clear; close all; clc

%% Setup the parameters you will use for this exercise
input_layer_size  = 400;  % 20x20 Input Images of Digits
hidden_layer_size = 25;   % 25 hidden units
num_labels = 10;          % 10 labels, from 1 to 10
                          % (note that we have mapped "0" to label 10)

%% =========== Part 1: Loading and Visualizing Data =============
%  We start the exercise by first loading and visualizing the dataset.
%  You will be working with a dataset that contains handwritten digits.
%

% Load Training Data
fprintf('Loading and Visualizing Data ...\n')

load('ex4data1.mat');
m = size(X, 1);

% Randomly select 100 data points to display
sel = randperm(size(X, 1));
sel = sel(1:100);

displayData(X(sel, :));

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 2: Loading Parameters ================
% In this part of the exercise, we load some pre-initialized
% neural network parameters.

fprintf('\nLoading Saved Neural Network Parameters ...\n')

% Load the weights into variables Theta1 (25x401) and Theta2 (10x26)
load('ex4weights.mat');

% Unroll parameters
nn_params = [Theta1(:) ; Theta2(:)];

%% ================ Part 3: Compute Cost (Feedforward) ================
%  To train the neural network, you should first start by implementing the
%  feedforward part of the neural network that returns the cost only. You
%  should complete the code in nnCostFunction.m to return the cost. After
%  implementing the feedforward to compute the cost, you can verify that
%  your implementation is correct by verifying that you get the same cost
%  as us for the fixed debugging parameters.
%
%  We suggest implementing the feedforward cost *without* regularization
%  first so that it will be easier for you to debug. Later, in Part 4, you
%  will get to implement the regularized cost.
%
fprintf('\nFeedforward Using Neural Network ...\n')

% Weight regularization parameter (we set this to 0 here).
lambda = 0;

J = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, ...
                   num_labels, X, y, lambda);

fprintf(['Cost at parameters (loaded from ex4weights): %f '...
         '\n(this value should be about 0.287629)\n'], J);

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% =============== Part 4: Implement Regularization ===============
%  Once your cost function implementation is correct, you should now
%  continue to implement the regularization with the cost.
%

fprintf('\nChecking Cost Function (w/ Regularization) ... \n')

% Weight regularization parameter (we set this to 1 here).
lambda = 1;

J = nnCostFunction(nn_params, input_layer_size, hidden_layer_size, ...
                   num_labels, X, y, lambda);

fprintf(['Cost at parameters (loaded from ex4weights): %f '...
         '\n(this value should be about 0.383770)\n'], J);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 5: Sigmoid Gradient ================
%  Before you start implementing the neural network, you will first
%  implement the gradient for the sigmoid function. You should complete
%  the code in the sigmoidGradient.m file.
%

fprintf('\nEvaluating sigmoid gradient...\n')

g = sigmoidGradient([-1 -0.5 0 0.5 1]);
fprintf('Sigmoid gradient evaluated at [-1 -0.5 0 0.5 1]:\n  ');
fprintf('%f ', g);
fprintf('\n\n');

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 6: Initializing Parameters ================
%  In this part of the exercise, you will be starting to implement a two
%  layer neural network that classifies digits. You will start by
%  implementing a function to initialize the weights of the neural network
%  (randInitializeWeights.m)

fprintf('\nInitializing Neural Network Parameters ...\n')

initial_Theta1 = randInitializeWeights(input_layer_size, hidden_layer_size);
initial_Theta2 = randInitializeWeights(hidden_layer_size, num_labels);

% Unroll parameters
initial_nn_params = [initial_Theta1(:) ; initial_Theta2(:)];

%% =============== Part 7: Implement Backpropagation ===============
%  Once your cost matches up with ours, you should proceed to implement the
%  backpropagation algorithm for the neural network. You should add to the
%  code you've written in nnCostFunction.m to return the partial
%  derivatives of the parameters.
%
fprintf('\nChecking Backpropagation... \n');

%  Check gradients by running checkNNGradients
checkNNGradients;

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% =============== Part 8: Implement Regularization ===============
%  Once your backpropagation implementation is correct, you should now
%  continue to implement the regularization with the cost and gradient.
%

fprintf('\nChecking Backpropagation (w/ Regularization) ... \n')

%  Check gradients by running checkNNGradients
lambda = 3;
checkNNGradients(lambda);

% Also output the costFunction debugging values
debug_J = nnCostFunction(nn_params, input_layer_size, ...
                         hidden_layer_size, num_labels, X, y, lambda);

fprintf(['\n\nCost at (fixed) debugging parameters (w/ lambda = %f): %f ' ...
         '\n(for lambda = 3, this value should be about 0.576051)\n\n'], ...
        lambda, debug_J);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% =================== Part 9: Training NN ===================
%  You have now implemented all the code necessary to train a neural
%  network. To train your neural network, we will now use "fmincg", which
%  is a function which works similarly to "fminunc". Recall that these
%  advanced optimizers are able to train our cost functions efficiently as
%  long as we provide them with the gradient computations.
%
fprintf('\nTraining Neural Network... \n')

%  After you have completed the assignment, change the MaxIter to a larger
%  value to see how more training helps.
options = optimset('MaxIter', 50);

%  You should also try different values of lambda
lambda = 1;

% Create "short hand" for the cost function to be minimized
costFunction = @(p) nnCostFunction(p, ...
                                   input_layer_size, ...
                                   hidden_layer_size, ...
                                   num_labels, X, y, lambda);

% Now, costFunction is a function that takes in only one argument (the
% neural network parameters)
[nn_params, cost] = fmincg(costFunction, initial_nn_params, options);

% Obtain Theta1 and Theta2 back from nn_params
Theta1 = reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1));
Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================= Part 10: Visualize Weights =================
%  You can now "visualize" what the neural network is learning by
%  displaying the hidden units to see what features they are capturing in
%  the data.

fprintf('\nVisualizing Neural Network... \n')

displayData(Theta1(:, 2:end));

fprintf('\nProgram paused. Press enter to continue.\n');
pause;

%% ================= Part 11: Implement Predict =================
%  After training the neural network, we would like to use it to predict
%  the labels. You will now implement the "predict" function to use the
%  neural network to predict the labels of the training set. This lets
%  you compute the training set accuracy.

pred = predict(Theta1, Theta2, X);

fprintf('\nTraining Set Accuracy: %f\n', mean(double(pred == y)) * 100);

ex4.m
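Of the three files the script asks you to complete, sigmoidGradient.m and randInitializeWeights.m are short. Since the sigmoid $g(z)=\frac{1}{1+e^{-z}}$ has derivative $g'(z)=g(z)(1-g(z))$, a minimal sketch of sigmoidGradient.m (my own, assuming the file layout of the exercise) is:

function g = sigmoidGradient(z)
%SIGMOIDGRADIENT gradient of the sigmoid, evaluated element-wise at z
    s = 1.0 ./ (1.0 + exp(-z));   % sigmoid(z); works for scalars and matrices
    g = s .* (1 - s);             % g'(z) = g(z) * (1 - g(z))
end

At z = 0 this returns 0.25, the maximum of the gradient, which is a quick sanity check on the Part 5 output. For randInitializeWeights.m, the exercise suggests drawing each weight uniformly from [-epsilon_init, epsilon_init] to break symmetry between hidden units; a sketch under that suggestion:

function W = randInitializeWeights(L_in, L_out)
%RANDINITIALIZEWEIGHTS random weights for a layer with L_in inputs
%   (plus a bias column) and L_out outputs.
    epsilon_init = sqrt(6) / sqrt(L_in + L_out);   % about 0.12 for this network
    W = rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init;
end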

1. Visualizing the data shows a grid of 100 randomly selected handwritten digits (see the figure).

2. Feedforward and cost function

  

$J(\Theta)=-\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\left[y^{(i)}_k\log\left((h_\Theta(x^{(i)}))_k\right)+(1-y^{(i)}_k)\log\left(1-(h_\Theta(x^{(i)}))_k\right)\right]+\frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\left(\Theta_{ji}^{(l)}\right)^2$

Note: $(h_\Theta(x^{(i)}))_k=a^{(3)}_k$, the activation of the $k$-th output unit.
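Concretely, for the 400-25-10 network used here, $h_\Theta(x)$ comes from two sigmoid layers. A vectorized sketch of the feedforward pass inside nnCostFunction.m (assuming the sigmoid.m helper provided with the exercise; a1, a2, a3 are my names for the layer activations):

a1 = [ones(m, 1) X];             % m x 401: input plus bias column
z2 = a1 * Theta1';               % m x 25
a2 = [ones(m, 1) sigmoid(z2)];   % m x 26: hidden activations plus bias
z3 = a2 * Theta2';               % m x 10
a3 = sigmoid(z3);                % m x 10: a3(i, k) = (h_Theta(x^(i)))_k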

When regularizing, the cost function ignores the bias terms: in the regularization sum, $j$ runs over the $s_{l+1}$ units of layer $l+1$ (the rows of $\Theta^{(l)}$) and $i$ starts from 1 and runs over the $s_l$ units of layer $l$ (the columns), so the bias column $i=0$ of each $\Theta^{(l)}$ is never penalized.
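Putting this together with the one-hot matrix Y from above, both terms of $J(\Theta)$ vectorize cleanly; a hedged sketch under the same assumptions:

% Unregularized cost: element-wise products replace the sums over i and k.
J = (1 / m) * sum(sum(-Y .* log(a3) - (1 - Y) .* log(1 - a3)));

% Regularization: square every weight except the bias column (column 1)
% of each Theta, as the formula above requires.
J = J + (lambda / (2 * m)) * (sum(sum(Theta1(:, 2:end) .^ 2)) + ...
                              sum(sum(Theta2(:, 2:end) .^ 2)));

With the pretrained weights from ex4weights.mat this should reproduce the script's checks: about 0.287629 with lambda = 0 and about 0.383770 with lambda = 1.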
