Exercise:Convolution and Pooling

Exercise link: Exercise:Convolution and Pooling

cnnExercise.m

%% CS294A/CS294W Convolutional Neural Networks Exercise

%  Instructions
%  ------------
%
%  This file contains code that helps you get started on the
%  convolutional neural networks exercise. In this exercise, you will only
%  need to modify cnnConvolve.m and cnnPool.m. You will not need to modify
%  this file.

%%======================================================================
%% STEP 0: Initialization
%  Here we initialize some parameters used for the exercise.

imageDim = 64;         % image dimension
imageChannels = 3;     % number of channels (rgb, so 3)

patchDim = 8;          % patch dimension
numPatches = 50000;    % number of patches

visibleSize = patchDim * patchDim * imageChannels;  % number of input units
outputSize = visibleSize;   % number of output units
hiddenSize = 400;           % number of hidden units

epsilon = 0.1;         % epsilon for ZCA whitening

poolDim = 19;          % dimension of pooling region
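% Sanity check on the sizes involved (my annotation, not part of the
% original starter code): each 64x64 image gives a (64 - 8 + 1) = 57x57
% grid of activations per feature, and mean pooling over 19x19 regions
% leaves floor(57/19) = 3x3 pooled values per feature, i.e.
% 400*3*3 = 3600 inputs to the softmax classifier.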
%%======================================================================
%% STEP 1: Train a sparse autoencoder (with a linear decoder) to learn
%  features from color patches. If you have completed the linear decoder
%  exercise, use the features that you have obtained from that exercise,
%  loading them into optTheta. Recall that we have to keep around the
%  parameters used in whitening (i.e., the ZCA whitening matrix and the
%  meanPatch)

% --------------------------- YOUR CODE HERE --------------------------
% Train the sparse autoencoder and fill the following variables with
% the optimal parameters:

optTheta =  zeros(2*hiddenSize*visibleSize+hiddenSize+visibleSize, 1);
ZCAWhite =  zeros(visibleSize, visibleSize);
meanPatch = zeros(visibleSize, 1);

load STL10Features.mat

% --------------------------------------------------------------------

% Display and check to see that the features look good
W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
b = optTheta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);

displayColorNetwork( (W*ZCAWhite)');
%%======================================================================
%% STEP 2: Implement and test convolution and pooling
%  In this step, you will implement convolution and pooling, and test them
%  on a small part of the data set to ensure that you have implemented
%  these two functions correctly. In the next step, you will actually
%  convolve and pool the features with the STL10 images.

%% STEP 2a: Implement convolution
%  Implement convolution in the function cnnConvolve in cnnConvolve.m

% Note that we have to preprocess the images in the exact same way
% we preprocessed the patches before we can obtain the feature activations.

load stlTrainSubset.mat % loads numTrainImages, trainImages, trainLabels

%% Use only the first 8 images for testing
convImages = trainImages(:, :, :, 1:8);

% NOTE: Implement cnnConvolve in cnnConvolve.m first!
convolvedFeatures = cnnConvolve(patchDim, hiddenSize, convImages, W, b, ZCAWhite, meanPatch);

%% STEP 2b: Checking your convolution
%  To ensure that you have convolved the features correctly, we have
%  provided some code to compare the results of your convolution with
%  activations from the sparse autoencoder

% For 1000 random points
for i = 1:1000
    featureNum = randi([1, hiddenSize]);
    imageNum = randi([1, 8]);
    imageRow = randi([1, imageDim - patchDim + 1]);
    imageCol = randi([1, imageDim - patchDim + 1]);

    patch = convImages(imageRow:imageRow + patchDim - 1, imageCol:imageCol + patchDim - 1, :, imageNum);
    patch = patch(:);
    patch = patch - meanPatch;
    patch = ZCAWhite * patch;

    features = feedForwardAutoencoder(optTheta, hiddenSize, visibleSize, patch);

    if abs(features(featureNum, 1) - convolvedFeatures(featureNum, imageNum, imageRow, imageCol)) > 1e-9
        fprintf('Convolved feature does not match activation from autoencoder\n');
        fprintf('Feature Number    : %d\n', featureNum);
        fprintf('Image Number      : %d\n', imageNum);
        fprintf('Image Row         : %d\n', imageRow);
        fprintf('Image Column      : %d\n', imageCol);
        fprintf('Convolved feature : %0.5f\n', convolvedFeatures(featureNum, imageNum, imageRow, imageCol));
        fprintf('Sparse AE feature : %0.5f\n', features(featureNum, 1));
        error('Convolved feature does not match activation from autoencoder');
    end
end

disp('Congratulations! Your convolution code passed the test.');

%% STEP 2c: Implement pooling
%  Implement pooling in the function cnnPool in cnnPool.m

% NOTE: Implement cnnPool in cnnPool.m first!
pooledFeatures = cnnPool(poolDim, convolvedFeatures);

%% STEP 2d: Checking your pooling
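% For reference (my arithmetic, not part of the original starter code):
% testMatrix below is filled column-wise with 1:64, so the four 4x4
% quadrant means are [14.5 46.5; 18.5 50.5], which is exactly what
% expectedMatrix computes.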
%  To ensure that you have implemented pooling, we will use your pooling
%  function to pool over a test matrix and check the results.

testMatrix = reshape(1:64, 8, 8);
expectedMatrix = [mean(mean(testMatrix(1:4, 1:4))) mean(mean(testMatrix(1:4, 5:8))); ...
                  mean(mean(testMatrix(5:8, 1:4))) mean(mean(testMatrix(5:8, 5:8))); ];

testMatrix = reshape(testMatrix, 1, 1, 8, 8);

pooledFeatures = squeeze(cnnPool(4, testMatrix));

if ~isequal(pooledFeatures, expectedMatrix)
    disp('Pooling incorrect');
    disp('Expected');
    disp(expectedMatrix);
    disp('Got');
    disp(pooledFeatures);
else
    disp('Congratulations! Your pooling code passed the test.');
end
%%======================================================================
%% STEP 3: Convolve and pool with the dataset
%  In this step, you will convolve each of the features you learned with
%  the full large images to obtain the convolved features. You will then
%  pool the convolved features to obtain the pooled features for
%  classification.
%
%  Because the convolved features matrix is very large, we will do the
%  convolution and pooling 50 features at a time to avoid running out of
%  memory. Reduce this number if necessary

stepSize = 50;
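% Rough memory estimate (my annotation, not in the original): holding all
% 400 convolved feature maps for the 2000 training images at once would
% take 400*2000*57*57*8 bytes, about 21 GB of doubles; 50 features at a
% time needs roughly 2.6 GB.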
assert(mod(hiddenSize, stepSize) == 0, 'stepSize should divide hiddenSize');

load stlTrainSubset.mat % loads numTrainImages, trainImages, trainLabels
load stlTestSubset.mat  % loads numTestImages, testImages, testLabels

pooledFeaturesTrain = zeros(hiddenSize, numTrainImages, ...
    floor((imageDim - patchDim + 1) / poolDim), ...
    floor((imageDim - patchDim + 1) / poolDim) );
pooledFeaturesTest = zeros(hiddenSize, numTestImages, ...
    floor((imageDim - patchDim + 1) / poolDim), ...
    floor((imageDim - patchDim + 1) / poolDim) );

tic();

for convPart = 1:(hiddenSize / stepSize)

    featureStart = (convPart - 1) * stepSize + 1;
    featureEnd = convPart * stepSize;

    fprintf('Step %d: features %d to %d\n', convPart, featureStart, featureEnd);
    Wt = W(featureStart:featureEnd, :);
    bt = b(featureStart:featureEnd);

    fprintf('Convolving and pooling train images\n');
    convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
        trainImages, Wt, bt, ZCAWhite, meanPatch);
    pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
    pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;
    toc();
    clear convolvedFeaturesThis pooledFeaturesThis;

    fprintf('Convolving and pooling test images\n');
    convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
        testImages, Wt, bt, ZCAWhite, meanPatch);
    pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
    pooledFeaturesTest(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;
    toc();

    clear convolvedFeaturesThis pooledFeaturesThis;

end

% You might want to save the pooled features since convolution and pooling takes a long time
save('cnnPooledFeatures.mat', 'pooledFeaturesTrain', 'pooledFeaturesTest');
toc();
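% On later runs you can skip the convolution and pooling above entirely
% (my suggestion, not part of the original exercise): comment out Step 3
% and reload the saved features instead.
% load cnnPooledFeatures.mat;  % restores pooledFeaturesTrain, pooledFeaturesTest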
%%======================================================================
%% STEP 4: Use pooled features for classification
%  Now, you will use your pooled features to train a softmax classifier,
%  using softmaxTrain from the softmax exercise.
%  Training the softmax classifier for 1000 iterations should take less than
%  10 minutes.

% Add the path to your softmax solution, if necessary
% addpath /path/to/solution/

% Setup parameters for softmax
softmaxLambda = 1e-4;
numClasses = 4;
% Reshape the pooledFeatures to form an input vector for softmax
softmaxX = permute(pooledFeaturesTrain, [1 3 4 2]);
softmaxX = reshape(softmaxX, numel(pooledFeaturesTrain) / numTrainImages,...
    numTrainImages);
softmaxY = trainLabels;

options = struct;
options.maxIter = 200;
softmaxModel = softmaxTrain(numel(pooledFeaturesTrain) / numTrainImages,...
    numClasses, softmaxLambda, softmaxX, softmaxY, options);

%%======================================================================
%% STEP 5: Test classifier
%  Now you will test your trained classifier against the test images

softmaxX = permute(pooledFeaturesTest, [1 3 4 2]);
softmaxX = reshape(softmaxX, numel(pooledFeaturesTest) / numTestImages, numTestImages);
softmaxY = testLabels;

[pred] = softmaxPredict(softmaxModel, softmaxX);
acc = (pred(:) == softmaxY(:));
acc = sum(acc) / size(acc, 1);
fprintf('Accuracy: %2.3f%%\n', acc * 100);

% You should expect to get an accuracy of around 80% on the test images.

cnnConvolve.m

function convolvedFeatures = cnnConvolve(patchDim, numFeatures, images, W, b, ZCAWhite, meanPatch)
%cnnConvolve Returns the convolution of the features given by W and b with
%the given images
%
% Parameters:
% patchDim - patch (feature) dimension
% numFeatures - number of features
% images - large images to convolve with, matrix in the form
% images(r, c, channel, image number)
% W, b - W, b for features from the sparse autoencoder
% ZCAWhite, meanPatch - ZCAWhitening and meanPatch matrices used for
% preprocessing
%
% Returns:
% convolvedFeatures - matrix of convolved features in the form
%    convolvedFeatures(featureNum, imageNum, imageRow, imageCol)

numImages = size(images, 4);
imageDim = size(images, 1);
imageChannels = size(images, 3);

convolvedFeatures = zeros(numFeatures, numImages, imageDim - patchDim + 1, imageDim - patchDim + 1);

% Instructions:
%   Convolve every feature with every large image here to produce the
%   numFeatures x numImages x (imageDim - patchDim + 1) x (imageDim - patchDim + 1)
%   matrix convolvedFeatures, such that
%   convolvedFeatures(featureNum, imageNum, imageRow, imageCol) is the
%   value of the convolved featureNum feature for the imageNum image over
%   the region (imageRow, imageCol) to (imageRow + patchDim - 1, imageCol + patchDim - 1)
%
%   Expected running times:
%     Convolving with 100 images should take less than 3 minutes
%     Convolving with 5000 images should take around an hour
%     (So to save time when testing, you should convolve with fewer images, as
%     described earlier)

% -------------------- YOUR CODE HERE --------------------
% Precompute the matrices that will be used during the convolution. Recall
% that you need to take into account the whitening and mean subtraction
% steps

% Fold the whitening and mean subtraction into W and b: for a raw patch p,
% W*(ZCAWhite*(p - meanPatch)) + b = (W*ZCAWhite)*p + (b - (W*ZCAWhite)*meanPatch)
W = W * ZCAWhite;       % W now maps raw (unwhitened) patches directly
b = b - W * meanPatch;  % W here is already W * ZCAWhite

% --------------------------------------------------------
for imageNum = 1:numImages
  for featureNum = 1:numFeatures

    % convolution of image with feature matrix for each channel
    convolvedImage = zeros(imageDim - patchDim + 1, imageDim - patchDim + 1);
    for channel = 1:imageChannels

      % Obtain the feature (patchDim x patchDim) needed during the convolution
      % ---- YOUR CODE HERE ----
      feature = reshape(W(featureNum, (channel - 1) * patchDim * patchDim + 1 : channel * patchDim * patchDim), patchDim, patchDim);
      % ------------------------

      % Flip the feature matrix because of the definition of convolution, as explained later
      feature = rot90(squeeze(feature), 2);

      % Obtain the image
      im = squeeze(images(:, :, channel, imageNum));

      % Convolve "feature" with "im", adding the result to convolvedImage
      % be sure to do a 'valid' convolution
      % ---- YOUR CODE HERE ----
      convolvedImage = convolvedImage + conv2(im, feature, 'valid');
      % ------------------------

    end

    % Subtract the bias unit (correcting for the mean subtraction as well)
    % Then, apply the sigmoid function to get the hidden activation
    % ---- YOUR CODE HERE ----
    convolvedImage = sigmoid(convolvedImage + b(featureNum));
    % ------------------------

    % The convolved feature is the sum of the convolved values for all channels
    convolvedFeatures(featureNum, imageNum, :, :) = convolvedImage;
  end
end

end

function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end
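A note on the rot90 flip in cnnConvolve: conv2 implements true convolution, which flips its kernel, while the autoencoder activation is a plain sliding-window dot product (cross-correlation). Pre-flipping the feature with rot90(feature, 2) makes the two agree. Below is a minimal sketch on toy data (my check, not part of the exercise) showing that conv2 with a pre-flipped kernel reproduces the patch-wise dot products:

im = rand(5);       % toy single-channel "image"
w  = rand(3);       % toy feature patch
ref = zeros(3);     % 5 - 3 + 1 = 3 valid positions per side
for r = 1:3
    for c = 1:3
        patch = im(r:r+2, c:c+2);
        ref(r, c) = sum(sum(w .* patch));   % what the autoencoder computes
    end
end
out = conv2(im, rot90(w, 2), 'valid');      % convolution with pre-flipped kernel
assert(max(abs(out(:) - ref(:))) < 1e-12);  % the two agree up to rounding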

cnnPool.m

function pooledFeatures = cnnPool(poolDim, convolvedFeatures)
%cnnPool Pools the given convolved features
%
% Parameters:
% poolDim - dimension of pooling region
% convolvedFeatures - convolved features to pool (as given by cnnConvolve)
% convolvedFeatures(featureNum, imageNum, imageRow, imageCol)
%
% Returns:
% pooledFeatures - matrix of pooled features in the form
% pooledFeatures(featureNum, imageNum, poolRow, poolCol)
%

numImages = size(convolvedFeatures, 2);
numFeatures = size(convolvedFeatures, 1);
convolvedDim = size(convolvedFeatures, 3);

pooledFeatures = zeros(numFeatures, numImages, floor(convolvedDim / poolDim), floor(convolvedDim / poolDim));

% -------------------- YOUR CODE HERE --------------------
% Instructions:
% Now pool the convolved features in regions of poolDim x poolDim,
% to obtain the
% numFeatures x numImages x (convolvedDim/poolDim) x (convolvedDim/poolDim)
% matrix pooledFeatures, such that
% pooledFeatures(featureNum, imageNum, poolRow, poolCol) is the
% value of the featureNum feature for the imageNum image pooled over the
% corresponding (poolRow, poolCol) pooling region
%   (see http://ufldl.stanford.edu/wiki/index.php/Pooling )
%
% Use mean pooling here.
% -------------------- YOUR CODE HERE --------------------

poolRow = floor(convolvedDim / poolDim);
poolCol = poolRow;

for i = 1:numFeatures
    for j = 1:numImages
        for k = 1:poolRow
            for l = 1:poolCol
                pooledFeatures(i, j, k, l) = mean(mean(convolvedFeatures(i, j, (k-1)*poolDim+1:k*poolDim, (l-1)*poolDim+1:l*poolDim)));
            end
        end
    end
end

end
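The quadruple loop above is straightforward but slow in MATLAB. As a point of comparison, here is a sketch of an equivalent, partially vectorized mean pooling (my alternative, not the exercise's reference solution): average with a poolDim x poolDim box filter via conv2, then keep every poolDim-th entry of the 'valid' output.

numFeatures  = size(convolvedFeatures, 1);
numImages    = size(convolvedFeatures, 2);
convolvedDim = size(convolvedFeatures, 3);
numPools     = floor(convolvedDim / poolDim);
boxFilter    = ones(poolDim) / poolDim^2;   % averaging kernel
pooledFeatures = zeros(numFeatures, numImages, numPools, numPools);
for i = 1:numFeatures
    for j = 1:numImages
        averaged = conv2(squeeze(convolvedFeatures(i, j, :, :)), boxFilter, 'valid');
        % entries 1, poolDim+1, 2*poolDim+1, ... are the non-overlapping pools
        pooledFeatures(i, j, :, :) = averaged(1:poolDim:numPools*poolDim, 1:poolDim:numPools*poolDim);
    end
end

Since the box filter is symmetric, conv2's kernel flip has no effect here, so no rot90 is needed.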

Accuracy: 80.500%
