Exercise:Learning color features with Sparse Autoencoders

Exercise link: Exercise:Learning color features with Sparse Autoencoders

sparseAutoencoderLinearCost.m

function [cost,grad,features] = sparseAutoencoderLinearCost(theta, visibleSize, hiddenSize, ...
                                                            lambda, sparsityParam, beta, data)
% -------------------- YOUR CODE HERE --------------------
% Instructions:
%   Copy sparseAutoencoderCost in sparseAutoencoderCost.m from your
%   earlier exercise onto this file, renaming the function to
%   sparseAutoencoderLinearCost, and changing the autoencoder to use a
%   linear decoder.
% -------------------- YOUR CODE HERE --------------------

% W1 is a hiddenSize * visibleSize matrix
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
% W2 is a visibleSize * hiddenSize matrix
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
% b1 is a hiddenSize * 1 vector
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
% b2 is a visibleSize * 1 vector
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);

numCases = size(data, 2);

% forward propagation
z2 = W1 * data + repmat(b1, 1, numCases);
a2 = sigmoid(z2);
z3 = W2 * a2 + repmat(b2, 1, numCases);
a3 = z3;                                  % linear decoder: no sigmoid on the output layer

% squared-error term
sqrerror = (data - a3) .* (data - a3);
error = sum(sum(sqrerror)) / (2 * numCases);

% weight decay
wtdecay = (sum(sum(W1 .* W1)) + sum(sum(W2 .* W2))) / 2;

% sparsity penalty (KL divergence between sparsityParam and the mean activation rho)
rho = sum(a2, 2) ./ numCases;
divergence = sparsityParam .* log(sparsityParam ./ rho) + (1 - sparsityParam) .* log((1 - sparsityParam) ./ (1 - rho));
sparsity = sum(divergence);

cost = error + lambda * wtdecay + beta * sparsity;

% backpropagation
% delta3 is a visibleSize * numCases matrix; the output layer is linear, so no f'(z3) factor
delta3 = -(data - a3);
% delta2 is a hiddenSize * numCases matrix
sparsityterm = beta * (-sparsityParam ./ rho + (1 - sparsityParam) ./ (1 - rho));
delta2 = (W2' * delta3 + repmat(sparsityterm, 1, numCases)) .* sigmoiddiff(z2);

W1grad = delta2 * data' ./ numCases + lambda * W1;
b1grad = sum(delta2, 2) ./ numCases;
W2grad = delta3 * a2' ./ numCases + lambda * W2;
b2grad = sum(delta3, 2) ./ numCases;

%-------------------------------------------------------------------
% After computing the cost and gradient, we will convert the gradients back
% to a vector format (suitable for minFunc). Specifically, we will unroll
% your gradient matrices into a vector.
grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];

end

function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end

function sigmdiff = sigmoiddiff(x)
    sigmdiff = sigmoid(x) .* (1 - sigmoid(x));
end
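
Before running the full exercise, the analytic gradient can be sanity-checked against a numerical gradient on a tiny random problem, much like the debug step in the earlier sparse autoencoder exercise. This is only a sketch: it assumes initializeParameters.m and computeNumericalGradient.m from the previous exercises are on the MATLAB path, and the sizes and hyperparameters below are arbitrary debug values, not the ones used on the color-patch data.

% Gradient check on a small random problem (debug only).
debugVisibleSize = 8;
debugHiddenSize  = 5;
lambda        = 3e-3;
sparsityParam = 0.035;
beta          = 5;
patches = rand(debugVisibleSize, 10);    % 10 random "patches"
theta   = initializeParameters(debugHiddenSize, debugVisibleSize);

[cost, grad] = sparseAutoencoderLinearCost(theta, debugVisibleSize, debugHiddenSize, ...
                                           lambda, sparsityParam, beta, patches);
numGrad = computeNumericalGradient(@(x) sparseAutoencoderLinearCost(x, debugVisibleSize, ...
                                   debugHiddenSize, lambda, sparsityParam, beta, patches), theta);

% The relative difference should be very small (on the order of 1e-9).
disp(norm(numGrad - grad) / norm(numGrad + grad));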

If the learned features come out looking like this, it is probably because a3 = z3 was written as a3 = sigmoid(z3).
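
The linear decoder changes only the output layer: since the output activation is the identity, its derivative is 1 and the output-layer delta loses the sigmoid'(z3) factor. A minimal comparison of the two cases, using the variable names from the code above:

% Linear decoder (this exercise): a3 = z3, so f'(z3) = 1
delta3 = -(data - a3);

% Sigmoid decoder (the earlier sparse autoencoder exercise): a3 = sigmoid(z3)
delta3 = -(data - a3) .* sigmoiddiff(z3);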
