Exercise:Learning color features with Sparse Autoencoders

Exercise link: Exercise:Learning color features with Sparse Autoencoders

sparseAutoencoderLinearCost.m

function [cost,grad,features] = sparseAutoencoderLinearCost(theta, visibleSize, hiddenSize, ...
                                                            lambda, sparsityParam, beta, data)
% -------------------- YOUR CODE HERE --------------------
% Instructions:
%   Copy sparseAutoencoderCost in sparseAutoencoderCost.m from your
%   earlier exercise onto this file, renaming the function to
%   sparseAutoencoderLinearCost, and changing the autoencoder to use a
%   linear decoder.
% -------------------- YOUR CODE HERE --------------------

% W1 is a hiddenSize * visibleSize matrix
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
% W2 is a visibleSize * hiddenSize matrix
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);
% b1 is a hiddenSize * 1 vector
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);
% b2 is a visibleSize * 1 vector
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);

numCases = size(data, 2);

% forward propagation
z2 = W1 * data + repmat(b1, 1, numCases);
a2 = sigmoid(z2);
z3 = W2 * a2 + repmat(b2, 1, numCases);
a3 = z3;    % linear decoder: the output activation is the identity

% squared-error term
sqrerror = (data - a3) .* (data - a3);
error = sum(sum(sqrerror)) / (2 * numCases);

% weight-decay term
wtdecay = (sum(sum(W1 .* W1)) + sum(sum(W2 .* W2))) / 2;

% sparsity penalty (KL divergence between sparsityParam and the mean activation rho)
rho = sum(a2, 2) ./ numCases;
divergence = sparsityParam .* log(sparsityParam ./ rho) + (1 - sparsityParam) .* log((1 - sparsityParam) ./ (1 - rho));
sparsity = sum(divergence);

cost = error + lambda * wtdecay + beta * sparsity;

% delta3 is a visibleSize * numCases matrix;
% there is no f'(z3) factor because the output layer is linear
delta3 = -(data - a3);
% delta2 is a hiddenSize * numCases matrix
sparsityterm = beta * (-sparsityParam ./ rho + (1 - sparsityParam) ./ (1 - rho));
delta2 = (W2' * delta3 + repmat(sparsityterm, 1, numCases)) .* sigmoiddiff(z2);

W1grad = delta2 * data' ./ numCases + lambda * W1;
b1grad = sum(delta2, 2) ./ numCases;
W2grad = delta3 * a2' ./ numCases + lambda * W2;
b2grad = sum(delta3, 2) ./ numCases;

%-------------------------------------------------------------------
% After computing the cost and gradient, we will convert the gradients back
% to a vector format (suitable for minFunc). Specifically, we will unroll
% your gradient matrices into a vector.

grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];

end

function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end

function sigmdiff = sigmoiddiff(x)
    sigmdiff = sigmoid(x) .* (1 - sigmoid(x));
end
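For reference, the sketch below shows one way this cost function can be hooked up to the optimizer in the exercise's driver script. It is only a sketch: it assumes the UFLDL starter files (initializeParameters.m and the minFunc package) are on the MATLAB path, that patches holds the ZCA-whitened color patches as a visibleSize x numPatches matrix, and the hyperparameter values shown are merely indicative of those the exercise write-up suggests.

% Training sketch (assumes initializeParameters.m and minFunc from the
% UFLDL starter code are on the path; 'patches' is the ZCA-whitened
% visibleSize x numPatches data matrix; hyperparameters are indicative).
imageChannels = 3;                 % RGB
patchDim      = 8;                 % 8x8 patches
visibleSize   = patchDim * patchDim * imageChannels;
hiddenSize    = 400;
sparsityParam = 0.035;
lambda        = 3e-3;
beta          = 5;

theta = initializeParameters(hiddenSize, visibleSize);

options = struct('Method', 'lbfgs', 'maxIter', 400, 'display', 'on');
[optTheta, optCost] = minFunc(@(p) sparseAutoencoderLinearCost(p, visibleSize, hiddenSize, ...
                                   lambda, sparsityParam, beta, patches), theta, options);

% The learned color features live in W1 (one feature per row).
W1 = reshape(optTheta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);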

If the features you get look like this, you probably wrote a3 = sigmoid(z3) instead of a3 = z3.
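The decoder choice matters because it changes the output-layer error term. With a sigmoid decoder, delta3 carries the derivative factor f'(z3) = a3 .* (1 - a3); with the linear decoder, f(z3) = z3, so f'(z3) = 1 and that factor drops out. A toy comparison of the two (sizes and data below are made up purely for illustration):

% Toy comparison of the output-layer delta for the two decoders
% (sizes and data are made up for illustration only).
data = rand(6, 4);                  % pretend visibleSize = 6, numCases = 4
z3   = randn(6, 4);                 % pre-activation of the output layer

a3_linear     = z3;                                 % linear decoder (this exercise)
delta3_linear = -(data - a3_linear);                % no derivative factor

a3_sigmoid     = 1 ./ (1 + exp(-z3));               % sigmoid decoder (earlier exercise)
delta3_sigmoid = -(data - a3_sigmoid) .* a3_sigmoid .* (1 - a3_sigmoid);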
