UFLDL Study Notes and Programming Assignment: Feature Extraction Using Convolution and Pooling

UFLDL has released a new tutorial, and it feels better than the old one: it starts from the basics, is systematic and clear, and comes with programming exercises.

In a high-quality deep learning group I heard some senior members say that there is no need to dig deep into other machine learning algorithms first; you can start learning DL directly.

So I recently started working through it. The tutorial plus MATLAB programming is a perfect combination.

The new tutorial is at: http://ufldl.stanford.edu/tutorial/



Study links:
http://ufldl.stanford.edu/tutorial/supervised/FeatureExtractionUsingConvolution/
http://ufldl.stanford.edu/tutorial/supervised/Pooling/
http://ufldl.stanford.edu/tutorial/supervised/ExerciseConvolutionAndPooling/


Convolution: I used MATLAB's conv2 function, which is slightly awkward here. Because conv2 computes convolution in the mathematical sense, it flips the filter by 180 degrees internally.
What we actually want is not convolution in the mathematical sense, just a plain "inner product": element-wise multiplication followed by a sum (i.e. cross-correlation). So we flip the filter ourselves first and then pass it to conv2, which gives exactly that.
In fact, I think flipping or not flipping does not change the final result, since W is learned anyway.
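
To convince myself, here is a minimal sketch (toy sizes, variable names are mine) showing that pre-flipping the filter with rot90 and then calling conv2 gives exactly the plain "inner product" (cross-correlation), which MATLAB's filter2 computes directly:

im = rand(8, 8);                             % toy image
w  = rand(3, 3);                             % toy filter
c1 = conv2(im, rot90(w, 2), 'valid');        % flip first, then convolve
c2 = filter2(w, im, 'valid');                % filter2 does correlation directly
disp(max(abs(c1(:) - c2(:))));               % ~0: the two agree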

Pooling: here the pooling stride equals poolDim, so the pooling regions do not overlap.

conv2 is used again to compute the means, which helps performance.

Remember: no activation function is needed here!
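
As a quick sanity check (toy sizes, my own variable names), summing with an all-ones filter via conv2 and then sampling at stride poolDim really does reproduce the blockwise mean, with no activation applied:

poolDim = 2;
A = rand(4, 4);                                  % toy feature map
S = conv2(A, ones(poolDim), 'valid');            % sum over every poolDim x poolDim window
P = S(1:poolDim:end, 1:poolDim:end) / poolDim^2; % keep non-overlapping windows, turn sums into means
disp(P(1, 1) - mean(mean(A(1:poolDim, 1:poolDim))));   % ~0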


This exercise is fairly simple.

Still, a few MATLAB functions are worth a quick summary (a tiny demo follows the list):

conv2 - 2-D convolution
squeeze - removes singleton dimensions
rot90 - rotates a matrix by 90 degrees (rot90(A, 2) gives a 180-degree flip)
reshape - changes the dimensions of an array
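
A tiny demo of the four functions on toy data (names are mine):

A = rand(4, 4, 1);               % 4x4x1 array
B = squeeze(A);                  % drop the singleton third dimension -> 4x4
C = rot90(B, 2);                 % rotate 180 degrees (flip both axes)
D = reshape(B, 2, 8);            % same 16 elements laid out as 2x8
E = conv2(B, ones(2), 'valid');  % 2-D convolution; 'valid' keeps the fully-overlapping part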


Result:

This exercise mainly checks whether the two functions you wrote are correct.


The main code follows:

cnnConvolve.m
function convolvedFeatures = cnnConvolve(filterDim, numFilters, images, W, b)
%cnnConvolve Returns the convolution of the features given by W and b with
%the given images
%
% Parameters:
%  filterDim  - filter (feature) dimension
%  numFilters - number of feature maps
%  images     - large images to convolve with, matrix in the form
%               images(r, c, image number)            % note the dimension order
%  W, b       - W, b for features from the sparse autoencoder
%               W is of shape (filterDim, filterDim, numFilters)
%               b is of shape (numFilters, 1)
%
% Returns:
%  convolvedFeatures - matrix of convolved features in the form
%                      convolvedFeatures(imageRow, imageCol, featureNum, imageNum)   % note the dimension order

numImages = size(images, 3);
imageDim  = size(images, 1);              % number of rows, i.e. the height; the images are assumed square
convDim   = imageDim - filterDim + 1;     % height/width of each convolved feature map

convolvedFeatures = zeros(convDim, convDim, numFilters, numImages);

% Instructions:
%   Convolve every filter with every image here to produce the
%   (imageDim - filterDim + 1) x (imageDim - filterDim + 1) x numFilters x numImages
%   matrix convolvedFeatures, such that
%   convolvedFeatures(imageRow, imageCol, featureNum, imageNum) is the
%   value of the convolved featureNum feature for the imageNum image over
%   the region (imageRow, imageCol) to (imageRow + filterDim - 1, imageCol + filterDim - 1)
%
% Expected running times:
%   Convolving with 100 images should take less than 30 seconds
%   Convolving with 5000 images should take around 2 minutes
%   (So to save time when testing, you should convolve with fewer images, as
%   described earlier)

for imageNum = 1:numImages
  for filterNum = 1:numFilters

    % convolution of image with feature matrix
    convolvedImage = zeros(convDim, convDim);

    % Obtain the feature (filterDim x filterDim) needed during the convolution
    filter = W(:, :, filterNum);

    % Flip the feature matrix because of the definition of convolution;
    % squeeze drops the singleton third dimension (3-D -> 2-D), rot90(..., 2) is a 180-degree flip
    filter = rot90(squeeze(filter), 2);

    % Obtain the image
    im = squeeze(images(:, :, imageNum));

    % Convolve "filter" with "im"; with the 'valid' option the result is already
    % convDim x convDim, so no manual cropping of the centre part is needed
    convolvedImage = conv2(im, filter, 'valid');

    % Add the bias unit, then apply the sigmoid function to get the hidden activation
    convolvedImage = convolvedImage + b(filterNum);
    convolvedImage = sigmoid(convolvedImage);

    % (optional) view the 2-D result as 4-D before storing it
    convolvedImage = reshape(convolvedImage, convDim, convDim, 1, 1);
    convolvedFeatures(:, :, filterNum, imageNum) = convolvedImage;
  end
end

end

function sigm = sigmoid(x)
% logistic sigmoid, defined locally in case it is not provided elsewhere in the exercise code
  sigm = 1 ./ (1 + exp(-x));
end
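
As a hedged spot check of cnnConvolve (random data; the shapes below are my own example, not the exercise's test values), one output entry can be compared against a direct "inner product" computed by hand:

filterDim = 8; numFilters = 4; imageDim = 28; numImages = 2;
images = rand(imageDim, imageDim, numImages);
W = rand(filterDim, filterDim, numFilters);
b = rand(numFilters, 1);
cf = cnnConvolve(filterDim, numFilters, images, W, b);
patch  = images(1:filterDim, 1:filterDim, 1);
manual = 1 / (1 + exp(-(sum(sum(patch .* W(:, :, 1))) + b(1))));
disp(cf(1, 1, 1, 1) - manual);   % should be ~0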



cnnPool.m
function pooledFeatures = cnnPool(poolDim, convolvedFeatures)
%cnnPool Pools the given convolved features
%
% Parameters:
%  poolDim           - dimension of pooling region
%  convolvedFeatures - convolved features to pool (as given by cnnConvolve)
%                      convolvedFeatures(imageRow, imageCol, featureNum, imageNum)
%
% Returns:
%  pooledFeatures - matrix of pooled features in the form
%                   pooledFeatures(poolRow, poolCol, featureNum, imageNum)

numImages    = size(convolvedFeatures, 4);
numFilters   = size(convolvedFeatures, 3);
convolvedDim = size(convolvedFeatures, 1);

pooledFeatures = zeros(convolvedDim / poolDim, ...
                       convolvedDim / poolDim, numFilters, numImages);

% Instructions:
%   Now pool the convolved features in regions of poolDim x poolDim,
%   to obtain the
%   (convolvedDim/poolDim) x (convolvedDim/poolDim) x numFilters x numImages
%   matrix pooledFeatures, such that
%   pooledFeatures(poolRow, poolCol, featureNum, imageNum) is the
%   value of the featureNum feature for the imageNum image pooled over the
%   corresponding (poolRow, poolCol) pooling region.
%
%   Use mean pooling here.

filter = ones(poolDim);   % all-ones filter: conv2 with it sums each poolDim x poolDim window

for imageNum = 1:numImages
  for filterNum = 1:numFilters
    im = squeeze(convolvedFeatures(:, :, filterNum, imageNum));   % one squeeze is enough

    % sum over every window, then keep only the non-overlapping windows
    % by sampling with stride poolDim
    pooledImage = conv2(im, filter, 'valid');
    pooledImage = pooledImage(1:poolDim:end, 1:poolDim:end);

    % divide by the window area to turn sums into means; no sigmoid is needed here
    pooledImage = pooledImage ./ (poolDim * poolDim);

    % (optional) view the 2-D result as 4-D before storing it
    pooledImage = reshape(pooledImage, convolvedDim / poolDim, convolvedDim / poolDim, 1, 1);
    pooledFeatures(:, :, filterNum, imageNum) = pooledImage;
  end
end

end
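
A similar hedged spot check for cnnPool (random data, sizes chosen by me): one pooled value should equal the mean of the corresponding poolDim x poolDim block.

poolDim   = 3;
convolved = rand(9, 9, 2, 2);    % convolvedDim = 9 is divisible by poolDim
pf    = cnnPool(poolDim, convolved);
block = convolved(1:poolDim, 1:poolDim, 1, 1);
disp(pf(1, 1, 1, 1) - mean(block(:)));   % should be ~0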


Author: linger
Original post: http://blog.csdn.net/lingerlanlan/article/details/38502627










Copyright notice: this is an original blog post; please do not reproduce it without the author's permission.
