Exercise:Sparse Autoencoder

Link to the exercise: Exercise:Sparse Autoencoder

Notes:

1. The pixel values of the training samples must be normalized.

Because the output layer's activation function is the logistic (sigmoid) function, whose range is (0,1), the autoencoder cannot reconstruct its inputs if each sample's pixel values are not normalized into that range (a quick check is sketched below).
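A minimal sanity check, assuming the patches are produced by the sampleIMAGES function given later in this post (its normalizeData step squashes values into [0.1, 0.9]):

patches = sampleIMAGES();   % normalized patches, one 8x8 patch per column
fprintf('pixel range: [%f, %f]\n', min(patches(:)), max(patches(:)));   % should lie inside [0.1, 0.9]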

2. During training, the vectorized implementation is roughly ten times faster than the for-loop implementation (see the sketch below).
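To illustrate the difference (a sketch only, not the exact code I timed; W1, b1, data, numCases, hiddenSize and sigmoid are named as in sparseAutoencoderCost.m below): the loop version computes the hidden activations one training example at a time, while the vectorized version handles all columns of data in a single matrix product.

% for-loop version: one training example (one column of data) at a time
a2 = zeros(hiddenSize, numCases);
for i = 1:numCases
    a2(:, i) = sigmoid(W1 * data(:, i) + b1);
end

% vectorized version: all examples in one matrix product
a2 = sigmoid(W1 * data + repmat(b1, 1, numCases));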

3. The image array produced at the end is the transpose of the weight matrix W1, with each column rendered as one image.

The i-th column is, up to a constant factor, the image x_i that maximally activates the i-th hidden unit: it equals x_i scaled by C, where C is the L2 norm of the i-th row of W1, i.e. sqrt(sum_j W1(i,j)^2).

For the derivation, see: Visualizing a Trained Autoencoder
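Concretely (my reading of that derivation): under the constraint ||x||_2 <= 1, the input that maximally activates hidden unit i has pixel j equal to W1(i,j) / sqrt(sum_j W1(i,j)^2), i.e. row i of W1 divided by its L2 norm. A sketch of the explicit normalization, using the display_network function that ships with the exercise's starter files:

% row i of W1 (= column i of W1'), divided by its L2 norm, is the input
% that maximally activates hidden unit i (subject to ||x||_2 <= 1)
Wnorm = bsxfun(@rdivide, W1, sqrt(sum(W1 .^ 2, 2)));
display_network(Wnorm');   % display_network.m comes with the starter code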

My implementation:

sampleIMAGES.m

function patches = sampleIMAGES()
% sampleIMAGES
% Returns 10000 patches for training

load IMAGES;    % load images from disk

patchsize = 8;      % we'll use 8x8 patches
numpatches = 10000;

% Initialize patches with zeros. Your code will fill in this matrix--one
% column per patch, 10000 columns.
patches = zeros(patchsize*patchsize, numpatches);

%% ---------- YOUR CODE HERE --------------------------------------
%  Instructions: Fill in the variable called "patches" using data
%  from IMAGES.
%
%  IMAGES is a 3D array containing 10 images
%  For instance, IMAGES(:,:,6) is a 512x512 array containing the 6th image,
%  and you can type "imagesc(IMAGES(:,:,6)), colormap gray;" to visualize
%  it. (The contrast on these images look a bit off because they have
%  been preprocessed using "whitening." See the lecture notes for
%  more details.)

for i = 1:numpatches
    % pick a random top-left corner: row & col in [1, 512-patchsize+1]
    % and a random image id in [1, 10]
    row = round(1 + rand(1,1)*(512-patchsize));
    col = round(1 + rand(1,1)*(512-patchsize));
    pid = round(1 + rand(1,1)*9);
    patches(:, i) = reshape(IMAGES(row:row+patchsize-1, col:col+patchsize-1, pid), patchsize*patchsize, 1);
end

%% ---------------------------------------------------------------
% For the autoencoder to work well we need to normalize the data
% Specifically, since the output of the network is bounded between [0,1]
% (due to the sigmoid activation function), we have to make sure
% the range of pixel values is also bounded between [0,1]
patches = normalizeData(patches);

end

%% ---------------------------------------------------------------
function patches = normalizeData(patches)
% Squash data to [0.1, 0.9] since we use sigmoid as the activation
% function in the output layer

% Remove DC (mean of images).
patches = bsxfun(@minus, patches, mean(patches));

% Truncate to +/-3 standard deviations and scale to -1 to 1
pstd = 3 * std(patches(:));
patches = max(min(patches, pstd), -pstd) / pstd;

% Rescale from [-1,1] to [0.1,0.9]
patches = (patches + 1) * 0.4 + 0.1;

end
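
A minimal usage sketch (mirroring the first step of the exercise; display_network is from the starter files):

patches = sampleIMAGES();
% eyeball 200 random patches to make sure the sampling looks reasonable
display_network(patches(:, randi(size(patches, 2), 200, 1)));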

computeNumericalGradient.m

function numgrad = computeNumericalGradient(J, theta)
% numgrad = computeNumericalGradient(J, theta)
% theta: a vector of parameters (column vector)
% J: a function that outputs a real-number. Calling y = J(theta) will return the
% function value at theta.

% Initialize numgrad with zeros
numgrad = zeros(size(theta));

%% ---------- YOUR CODE HERE --------------------------------------
% Instructions:
% Implement numerical gradient checking, and return the result in numgrad.
% (See Section 2.3 of the lecture notes.)
% You should write code so that numgrad(i) is (the numerical approximation to) the
% partial derivative of J with respect to the i-th input argument, evaluated at theta.
% I.e., numgrad(i) should be (approximately) the partial derivative of J with
% respect to theta(i).
%
% Hint: You will probably want to compute the elements of numgrad one at a time.

N = size(theta, 1);
EPSILON = 1e-4;
Identity = eye(N);
for i = 1:N
    numgrad(i,:) = (J(theta + EPSILON * Identity(:, i)) - J(theta - EPSILON * Identity(:, i))) / (2 * EPSILON);
end
%% ---------------------------------------------------------------
end
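
To verify the backpropagation code, I compare the analytic gradient against the numerical one on a small problem. A sketch following the gradient-checking step of the exercise (the hyperparameters and the use of only a few patches are illustrative, just to keep the check fast; initializeParameters is from the starter code):

theta = initializeParameters(hiddenSize, visibleSize);   % random initialization
[cost, grad] = sparseAutoencoderCost(theta, visibleSize, hiddenSize, lambda, ...
                                     sparsityParam, beta, patches(:, 1:10));
numgrad = computeNumericalGradient(@(p) sparseAutoencoderCost(p, visibleSize, hiddenSize, ...
                                     lambda, sparsityParam, beta, patches(:, 1:10)), theta);
diff = norm(numgrad - grad) / norm(numgrad + grad);
disp(diff);   % should be very small (on the order of 1e-9) if backprop is correct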

sparseAutoencoderCost.m

function [cost,grad] = sparseAutoencoderCost(theta, visibleSize, hiddenSize, ...
                                             lambda, sparsityParam, beta, data)

% visibleSize: the number of input units (probably 64)
% hiddenSize: the number of hidden units (probably 25)
% lambda: weight decay parameter
% sparsityParam: The desired average activation for the hidden units (denoted in the lecture
%                notes by the greek alphabet rho, which looks like a lower-case "p").
% beta: weight of sparsity penalty term
% data: Our 64x10000 matrix containing the training data. So, data(:,i) is the i-th training example.

% The input theta is a vector (because minFunc expects the parameters to be a vector).
% We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this
% follows the notation convention of the lecture notes.

% W1 is a hiddenSize * visibleSize matrix
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
% W2 is a visibleSize * hiddenSize matrix
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
% b1 is a hiddenSize * 1 vector
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
% b2 is a visibleSize * 1 vector
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);

% Cost and gradient variables (your code needs to compute these values).
% Here, we initialize them to zeros.
cost = 0;
W1grad = zeros(size(W1));
W2grad = zeros(size(W2));
b1grad = zeros(size(b1));
b2grad = zeros(size(b2));

%% ---------- YOUR CODE HERE --------------------------------------
% Instructions: Compute the cost/optimization objective J_sparse(W,b) for the Sparse Autoencoder,
%               and the corresponding gradients W1grad, W2grad, b1grad, b2grad.
%
% W1grad, W2grad, b1grad and b2grad should be computed using backpropagation.
% Note that W1grad has the same dimensions as W1, b1grad has the same dimensions
% as b1, etc. Your code should set W1grad to be the partial derivative of J_sparse(W,b) with
% respect to W1. I.e., W1grad(i,j) should be the partial derivative of J_sparse(W,b)
% with respect to the input parameter W1(i,j). Thus, W1grad should be equal to the term
% [(1/m) \Delta W^{(1)} + \lambda W^{(1)}] in the last block of pseudo-code in Section 2.2
% of the lecture notes (and similarly for W2grad, b1grad, b2grad).
%
% Stated differently, if we were using batch gradient descent to optimize the parameters,
% the gradient descent update to W1 would be W1 := W1 - alpha * W1grad, and similarly for W2, b1, b2.

numCases = size(data, 2);

% forward propagation
z2 = W1 * data + repmat(b1, 1, numCases);
a2 = sigmoid(z2);
z3 = W2 * a2 + repmat(b2, 1, numCases);
a3 = sigmoid(z3);

% squared reconstruction error
sqrerror = (data - a3) .* (data - a3);
error = sum(sum(sqrerror)) / (2 * numCases);

% weight decay
wtdecay = (sum(sum(W1 .* W1)) + sum(sum(W2 .* W2))) / 2;

% sparsity penalty: KL divergence between sparsityParam and the average activation rho
rho = sum(a2, 2) ./ numCases;
divergence = sparsityParam .* log(sparsityParam ./ rho) + (1 - sparsityParam) .* log((1 - sparsityParam) ./ (1 - rho));
sparsity = sum(divergence);

cost = error + lambda * wtdecay + beta * sparsity;

% backpropagation
% delta3 is a visibleSize * numCases matrix
delta3 = -(data - a3) .* sigmoiddiff(z3);
% delta2 is a hiddenSize * numCases matrix
sparsityterm = beta * (-sparsityParam ./ rho + (1 - sparsityParam) ./ (1 - rho));
delta2 = (W2' * delta3 + repmat(sparsityterm, 1, numCases)) .* sigmoiddiff(z2);

W1grad = delta2 * data' ./ numCases + lambda * W1;
b1grad = sum(delta2, 2) ./ numCases;
W2grad = delta3 * a2' ./ numCases + lambda * W2;
b2grad = sum(delta3, 2) ./ numCases;

%-------------------------------------------------------------------
% After computing the cost and gradient, we will convert the gradients back
% to a vector format (suitable for minFunc). Specifically, we will unroll
% your gradient matrices into a vector.

grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];

end

%-------------------------------------------------------------------
% Here's an implementation of the sigmoid function, which you may find useful
% in your computation of the costs and the gradients. This inputs a (row or
% column) vector (say (z1, z2, z3)) and returns (f(z1), f(z2), f(z3)).

function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end

function sigmdiff = sigmoiddiff(x)
    sigmdiff = sigmoid(x) .* (1 - sigmoid(x));
end
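
The full training step follows the structure of the exercise's train.m: sample patches, initialize parameters, hand sparseAutoencoderCost to minFunc's L-BFGS, then visualize W1. A sketch with the exercise's default hyperparameters as I recall them (sparsityParam = 0.01, lambda = 0.0001, beta = 3, 400 iterations); minFunc, initializeParameters and display_network all come with the starter files:

visibleSize = 8*8;      % number of input units
hiddenSize = 25;        % number of hidden units
sparsityParam = 0.01;   % desired average activation of the hidden units
lambda = 0.0001;        % weight decay parameter
beta = 3;               % weight of the sparsity penalty term

patches = sampleIMAGES();
theta = initializeParameters(hiddenSize, visibleSize);

% minimize the cost with L-BFGS
addpath minFunc/
options.Method = 'lbfgs';
options.maxIter = 400;
options.display = 'on';
[opttheta, cost] = minFunc(@(p) sparseAutoencoderCost(p, visibleSize, hiddenSize, ...
                                lambda, sparsityParam, beta, patches), ...
                           theta, options);

% visualize the learned features (columns of W1')
W1 = reshape(opttheta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
display_network(W1');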

Final training result: the visualization of W1 described in note 3 above (image not reproduced here).
