Exercise:Sparse Autoencoder

Link to the exercise: Exercise:Sparse Autoencoder

Notes:

1. The pixel values of the training samples must be normalized.

Because the output layer uses the logistic (sigmoid) activation function, whose range is (0, 1),

the autoencoder cannot reconstruct the training samples if their pixel values are not normalized into that range.

2. During training, the vectorized implementation is roughly ten times faster than the for-loop implementation.

3. The final image array is produced from the transpose of the weight matrix W1, with each column rendered as one image.

The i-th column (i.e. the i-th row of W1) is, up to a constant factor C, the input image x_i that maximally activates the i-th hidden unit, where C is the L2 norm of the i-th row of W1 (the square root of the sum of squares of its elements); see the sketch right after these notes.

The derivation is given in: Visualizing a Trained Autoencoder
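
As a quick illustration of note 3, here is a minimal MATLAB/Octave sketch; it assumes W1 is the hiddenSize x visibleSize encoder weight matrix already extracted from the trained parameter vector:

i = 1;                    % pick any hidden unit
C = norm(W1(i, :));       % the constant factor C: the L2 norm of row i of W1
x_i = W1(i, :)' / C;      % unit-norm input that maximally activates hidden unit i
imagesc(reshape(x_i, 8, 8)); colormap gray;   % view it as an 8x8 patch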

My implementation:

sampleIMAGES.m

function patches = sampleIMAGES()
% sampleIMAGES
% Returns 10000 patches for training

load IMAGES;    % load images from disk

patchsize = 8;  % we'll use 8x8 patches
numpatches = 10000;

% Initialize patches with zeros. Your code will fill in this matrix--one
% column per patch, 10000 columns.
patches = zeros(patchsize*patchsize, numpatches);

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Fill in the variable called "patches" using data
%  from IMAGES.
%
%  IMAGES is a 3D array containing 10 images.
%  For instance, IMAGES(:,:,6) is a 512x512 array containing the 6th image,
%  and you can type "imagesc(IMAGES(:,:,6)), colormap gray;" to visualize
%  it. (The contrast on these images looks a bit off because they have
%  been preprocessed using "whitening." See the lecture notes for
%  more details.) As a second example, IMAGES(21:30,21:30,1) is an image
%  patch corresponding to the pixels in the block (21,21) to (30,30) of
%  Image 1.

for i = 1:numpatches
    % generate a random top-left corner in [1, 512-patchsize+1]
    row = round(1 + rand(1,1)*(512-patchsize));
    col = round(1 + rand(1,1)*(512-patchsize));
    % generate a random image id in [1, 10]
    pid = round(1 + rand(1,1)*9);
    patches(:, i) = reshape(IMAGES(row:row+patchsize-1, col:col+patchsize-1, pid), patchsize*patchsize, 1);
end

%% ---------------------------------------------------------------
% For the autoencoder to work well we need to normalize the data.
% Specifically, since the output of the network is bounded between [0,1]
% (due to the sigmoid activation function), we have to make sure
% the range of pixel values is also bounded between [0,1].
patches = normalizeData(patches);

end

%% ---------------------------------------------------------------
function patches = normalizeData(patches)
% Squash data to [0.1, 0.9] since we use sigmoid as the activation
% function in the output layer.

% Remove DC (mean of images).
patches = bsxfun(@minus, patches, mean(patches));

% Truncate to +/-3 standard deviations and scale to [-1, 1].
pstd = 3 * std(patches(:));
patches = max(min(patches, pstd), -pstd) / pstd;

% Rescale from [-1, 1] to [0.1, 0.9].
patches = (patches + 1) * 0.4 + 0.1;

end
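
To sanity-check the sampling, the patches can be visualized directly. This is a minimal sketch assuming the display_network.m helper that ships with the exercise starter code is on the path:

patches = sampleIMAGES();
% show 200 randomly selected patches out of the 10000 sampled ones
display_network(patches(:, randi(size(patches, 2), 200, 1)), 8);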

computeNumericalGradient.m

function numgrad = computeNumericalGradient(J, theta)
% numgrad = computeNumericalGradient(J, theta)
% theta: a vector of parameters (column vector)
% J: a function that outputs a real number. Calling y = J(theta) will return the
% function value at theta.

% Initialize numgrad with zeros
numgrad = zeros(size(theta));

%% ---------- YOUR CODE HERE --------------------------------------
% Instructions:
% Implement numerical gradient checking, and return the result in numgrad.
% (See Section 2.3 of the lecture notes.)
% You should write code so that numgrad(i) is (the numerical approximation to) the
% partial derivative of J with respect to the i-th input argument, evaluated at theta.
% I.e., numgrad(i) should be (approximately) the partial derivative of J with
% respect to theta(i).
%
% Hint: You will probably want to compute the elements of numgrad one at a time.

N = size(theta, 1);
EPSILON = 1e-4;
Identity = eye(N);

for i = 1:N
    % two-sided finite difference along the i-th coordinate
    numgrad(i) = (J(theta + EPSILON * Identity(:, i)) - J(theta - EPSILON * Identity(:, i))) / (2 * EPSILON);
end

%% ---------------------------------------------------------------
end
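
Before training, the analytic gradient from sparseAutoencoderCost should be compared against this numerical gradient. The sketch below assumes the initializeParameters helper from the starter code and hyperparameters (lambda, sparsityParam, beta) already defined as in the train.m skeleton:

theta = initializeParameters(hiddenSize, visibleSize);
[cost, grad] = sparseAutoencoderCost(theta, visibleSize, hiddenSize, lambda, ...
                                     sparsityParam, beta, patches(:, 1:10));
numgrad = computeNumericalGradient(@(x) sparseAutoencoderCost(x, visibleSize, hiddenSize, ...
                                     lambda, sparsityParam, beta, patches(:, 1:10)), theta);
% the relative difference should be very small (on the order of 1e-9)
disp(norm(numgrad - grad) / norm(numgrad + grad));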

sparseAutoencoderCost.m

function [cost,grad] = sparseAutoencoderCost(theta, visibleSize, hiddenSize, ...
                                             lambda, sparsityParam, beta, data)

% visibleSize: the number of input units (probably 64)
% hiddenSize: the number of hidden units (probably 25)
% lambda: weight decay parameter
% sparsityParam: The desired average activation for the hidden units (denoted in the lecture
%                notes by the greek alphabet rho, which looks like a lower-case "p").
% beta: weight of sparsity penalty term
% data: Our 64x10000 matrix containing the training data. So, data(:,i) is the i-th training example.

% The input theta is a vector (because minFunc expects the parameters to be a vector).
% We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this
% follows the notation convention of the lecture notes.

% W1 is a hiddenSize * visibleSize matrix
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
% W2 is a visibleSize * hiddenSize matrix
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
% b1 is a hiddenSize * 1 vector
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
% b2 is a visibleSize * 1 vector
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);

% Cost and gradient variables (your code needs to compute these values).
% Here, we initialize them to zeros.
cost = 0;
W1grad = zeros(size(W1));
W2grad = zeros(size(W2));
b1grad = zeros(size(b1));
b2grad = zeros(size(b2));

%% ---------- YOUR CODE HERE --------------------------------------
% Instructions: Compute the cost/optimization objective J_sparse(W,b) for the Sparse Autoencoder,
%               and the corresponding gradients W1grad, W2grad, b1grad, b2grad.
%
% W1grad, W2grad, b1grad and b2grad should be computed using backpropagation.
% Note that W1grad has the same dimensions as W1, b1grad has the same dimensions
% as b1, etc. Your code should set W1grad to be the partial derivative of J_sparse(W,b) with
% respect to W1. I.e., W1grad(i,j) should be the partial derivative of J_sparse(W,b)
% with respect to the input parameter W1(i,j). Thus, W1grad should be equal to the term
% [(1/m) \Delta W^{(1)} + \lambda W^{(1)}] in the last block of pseudo-code in Section 2.2
% of the lecture notes (and similarly for W2grad, b1grad, b2grad).
%
% Stated differently, if we were using batch gradient descent to optimize the parameters,
% the gradient descent update to W1 would be W1 := W1 - alpha * W1grad, and similarly for W2, b1, b2.

numCases = size(data, 2);

% forward propagation
z2 = W1 * data + repmat(b1, 1, numCases);
a2 = sigmoid(z2);
z3 = W2 * a2 + repmat(b2, 1, numCases);
a3 = sigmoid(z3);

% squared reconstruction error
sqrerror = (data - a3) .* (data - a3);
error = sum(sum(sqrerror)) / (2 * numCases);

% weight decay
wtdecay = (sum(sum(W1 .* W1)) + sum(sum(W2 .* W2))) / 2;

% sparsity penalty (KL divergence between sparsityParam and average activation rho)
rho = sum(a2, 2) ./ numCases;
divergence = sparsityParam .* log(sparsityParam ./ rho) + (1 - sparsityParam) .* log((1 - sparsityParam) ./ (1 - rho));
sparsity = sum(divergence);

cost = error + lambda * wtdecay + beta * sparsity;

% delta3 is a visibleSize * numCases matrix
delta3 = -(data - a3) .* sigmoiddiff(z3);
% delta2 is a hiddenSize * numCases matrix
sparsityterm = beta * (-sparsityParam ./ rho + (1 - sparsityParam) ./ (1 - rho));
delta2 = (W2' * delta3 + repmat(sparsityterm, 1, numCases)) .* sigmoiddiff(z2);

W1grad = delta2 * data' ./ numCases + lambda * W1;
b1grad = sum(delta2, 2) ./ numCases;
W2grad = delta3 * a2' ./ numCases + lambda * W2;
b2grad = sum(delta3, 2) ./ numCases;

%-------------------------------------------------------------------
% After computing the cost and gradient, we will convert the gradients back
% to a vector format (suitable for minFunc). Specifically, we will unroll
% your gradient matrices into a vector.

grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];

end

%-------------------------------------------------------------------
% Here's an implementation of the sigmoid function, which you may find useful
% in your computation of the costs and the gradients. This inputs a (row or
% column) vector (say (z1, z2, z3)) and returns (f(z1), f(z2), f(z3)).

function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end

function sigmdiff = sigmoiddiff(x)
    sigmdiff = sigmoid(x) .* (1 - sigmoid(x));
end
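
For reference, training plugs this cost function into minFunc (L-BFGS). The following is a minimal sketch along the lines of the train.m skeleton provided with the exercise; the hyperparameter values are the ones suggested there, and initializeParameters, display_network, and the minFunc directory all come from the starter code:

visibleSize = 8*8;      % number of input units (one per pixel of an 8x8 patch)
hiddenSize = 25;        % number of hidden units
sparsityParam = 0.01;   % desired average activation rho of the hidden units
lambda = 0.0001;        % weight decay parameter
beta = 3;               % weight of the sparsity penalty term

patches = sampleIMAGES();
theta = initializeParameters(hiddenSize, visibleSize);

addpath minFunc/
options.Method = 'lbfgs';
options.maxIter = 400;
options.display = 'on';

[opttheta, cost] = minFunc(@(p) sparseAutoencoderCost(p, visibleSize, hiddenSize, ...
                                lambda, sparsityParam, beta, patches), ...
                           theta, options);

% visualize the learned features: each row of W1 becomes one 8x8 image
W1 = reshape(opttheta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
display_network(W1', 12);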

Final training result:
