UFLDL Tutorial Notes and Exercise Answers (5): Linear Decoders for Autoencoders, and Working with Large Images (Convolution and Pooling)
Linear Decoders for Autoencoders
The linear decoder addresses a limitation of the sparse autoencoder, whose output layer uses the sigmoid function. Since a sparse autoencoder learns to reproduce its input at the output, and the sigmoid's range is [0,1], the input is implicitly forced to lie in [0,1] as well, a hidden constraint on the input features. To remove this constraint, we can give the last layer a linear activation, i.e. a = z.
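Concretely, only two lines change relative to the sigmoid autoencoder: the output layer's forward pass and its error term. A minimal sketch with illustrative variable names (a2 is the hidden activation, y the target, m the number of examples):

z3 = W2 * a2 + repmat(b2, 1, m);    % output layer pre-activation
a3 = z3;                            % linear decoder: a3 = z3, no sigmoid

% With a squared-error cost, the output error term drops the f'(z3) factor:
delta3 = a3 - y;                          % linear decoder
% delta3 = (a3 - y) .* a3 .* (1 - a3);    % what it would be with a sigmoid output

This is exactly the difference that shows up in sparseAutoencoderLinearCost.m below.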
Exercise answers:
sparseAutoencoderLinearCost.m
function [cost,grad,features] = sparseAutoencoderLinearCost(theta, visibleSize, hiddenSize, ...
lambda, sparsityParam, beta, data)
% -------------------- YOUR CODE HERE --------------------
% Instructions:
% Copy sparseAutoencoderCost in sparseAutoencoderCost.m from your
% earlier exercise onto this file, renaming the function to
% sparseAutoencoderLinearCost, and changing the autoencoder to use a
% linear decoder.
% -------------------- YOUR CODE HERE --------------------

% visibleSize: the number of input units (probably 64)
% hiddenSize: the number of hidden units (probably 25)
% lambda: weight decay parameter
% sparsityParam: The desired average activation for the hidden units (denoted in the lecture
% notes by the greek alphabet rho, which looks like a lower-case "p").
% beta: weight of sparsity penalty term
% data: Our 64x10000 matrix containing the training data.  So, data(:,i) is the i-th training example.

% The input theta is a vector (because minFunc expects the parameters to be a vector).
% We first convert theta to the (W1, W2, b1, b2) matrix/vector format, so that this
% follows the notation convention of the lecture notes.
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);                           % W1 is 25x64
W2 = reshape(theta(hiddenSize*visibleSize+1:2*hiddenSize*visibleSize), visibleSize, hiddenSize);  % W2 is 64x25
b1 = theta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);                       % b1 is 25x1
b2 = theta(2*hiddenSize*visibleSize+hiddenSize+1:end);                                            % b2 is 64x1

% Cost and gradient variables (your code needs to compute these values).
% Here, we initialize them to zeros.
cost = 0;
W1grad = zeros(size(W1));  % 25x64
W2grad = zeros(size(W2));  % 64x25
b1grad = zeros(size(b1));  % 25x1 (hidden)
b2grad = zeros(size(b2));  % 64x1 (visible)

%% ---------- YOUR CODE HERE --------------------------------------
% Instructions: Compute the cost/optimization objective J_sparse(W,b) for the Sparse Autoencoder,
% and the corresponding gradients W1grad, W2grad, b1grad, b2grad.
%
% W1grad, W2grad, b1grad and b2grad should be computed using backpropagation.
% Note that W1grad has the same dimensions as W1, b1grad has the same dimensions
% as b1, etc. Your code should set W1grad to be the partial derivative of J_sparse(W,b) with
% respect to W1. I.e., W1grad(i,j) should be the partial derivative of J_sparse(W,b)
% with respect to the input parameter W1(i,j). Thus, W1grad should be equal to the term
% [(1/m) \Delta W^{(1)} + \lambda W^{(1)}] in the last block of pseudo-code in Section 2.2
% of the lecture notes (and similarly for W2grad, b1grad, b2grad).
%
% Stated differently, if we were using batch gradient descent to optimize the parameters,
% the gradient descent update to W1 would be W1 := W1 - alpha * W1grad, and similarly for W2, b1, b2.
%

% 1. Forward propagation
data_size = size(data);                            % [64, 10000]
active_value2 = repmat(b1, 1, data_size(2));       % replicate b1 across all 10000 columns: 25x10000
active_value3 = repmat(b2, 1, data_size(2));       % replicate b2 across all 10000 columns: 64x10000
active_value2 = sigmoid(W1*data+active_value2);    % hidden activations, one column per example: 25x10000
active_value3 = W2*active_value2+active_value3;    % output activations (linear decoder, no sigmoid): 64x10000

% 2. Compute the cost and the error terms
ave_square = sum(sum((active_value3-data).^2)./2)/data_size(2);   % first cost term: average squared reconstruction error
weight_decay = lambda/2*(sum(sum(W1.^2))+sum(sum(W2.^2)));        % second cost term: weight decay over all weights

p_real = sum(active_value2,2)./data_size(2);       % estimated average activation rho_hat of the hidden units: 25x1
p_para = repmat(sparsityParam, hiddenSize, 1);     % target sparsity rho
sparsity = beta.*sum(p_para.*log(p_para./p_real)+(1-p_para).*log((1-p_para)./(1-p_real)));  % KL-divergence penalty
cost = ave_square + weight_decay + sparsity;       % final cost function

delta3 = (active_value3-data);                     % output error term: 64x10000; linear decoder, so no f'(z3) factor
average_sparsity = repmat(sum(active_value2,2)./data_size(2), 1, data_size(2));  % rho_hat replicated across columns
default_sparsity = repmat(sparsityParam, hiddenSize, data_size(2));              % rho replicated across columns
sparsity_penalty = beta.*(-(default_sparsity./average_sparsity)+((1-default_sparsity)./(1-average_sparsity)));
delta2 = (W2'*delta3+sparsity_penalty).*((active_value2).*(1-active_value2));    % hidden error term: 25x10000

% 3. Backpropagation
W2grad = delta3*active_value2'./data_size(2)+lambda.*W2;  % 64x25
W1grad = delta2*data'./data_size(2)+lambda.*W1;           % 25x64
b2grad = sum(delta3,2)./data_size(2);                     % 64x1 (visible)
b1grad = sum(delta2,2)./data_size(2);                     % 25x1 (hidden)

%-------------------------------------------------------------------
% After computing the cost and gradient, we will convert the gradients back
% to a vector format (suitable for minFunc). Specifically, we will unroll
% your gradient matrices into a vector.
grad = [W1grad(:) ; W2grad(:) ; b1grad(:) ; b2grad(:)];

end

%-------------------------------------------------------------------
% Here's an implementation of the sigmoid function, which you may find useful
% in your computation of the costs and the gradients.  This inputs a (row or
% column) vector (say (z1, z2, z3)) and returns (f(z1), f(z2), f(z3)).
function sigm = sigmoid(x)
    sigm = 1 ./ (1 + exp(-x));
end
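Before training on real patches, it is worth sanity-checking the analytic gradient against a numerical estimate. A hedged usage sketch, assuming initializeParameters.m and computeNumericalGradient.m from the earlier sparse autoencoder exercise are on the path (the hyperparameter values here are only illustrative):

debugHidden = 5; debugVisible = 8;                  % tiny model so the check is fast
theta = initializeParameters(debugHidden, debugVisible);
data = rand(debugVisible, 10);
[cost, grad] = sparseAutoencoderLinearCost(theta, debugVisible, debugHidden, 3e-3, 0.035, 5, data);
numGrad = computeNumericalGradient(@(t) sparseAutoencoderLinearCost(t, debugVisible, debugHidden, 3e-3, 0.035, 5, data), theta);
disp(norm(numGrad - grad) / norm(numGrad + grad));  % should be very small, e.g. < 1e-9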
Working with Large Images
Working with large images relies mainly on convolution and pooling. Convolution exploits an inherent property of natural images: the statistics of one part of an image are the same as those of any other part.
This means that features learned on one part of an image can also be used on another part, so the same learned features can be applied at every position in the image. The procedure is to first train a sparse autoencoder on unlabeled data, which yields a hiddenSize x inputSize parameter matrix; each 1 x inputSize parameter row w is then convolved with the large image.
The convolution is computed as follows: for each feature map in this layer, the kernel is convolved with each of the input image's three channels, the results are summed, the bias parameter is added, and the sigmoid function is applied; the result is that feature map.
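A minimal sketch of one feature map's response on dummy data (the names featureKernels and bias are illustrative; conv2 computes true convolution, so the kernel must first be flipped, just as cnnConvolve.m does below):

imageDim = 64; patchDim = 8;
image = rand(imageDim, imageDim, 3);             % dummy RGB image
featureKernels = rand(patchDim, patchDim, 3);    % one kernel per channel
bias = 0;
response = zeros(imageDim-patchDim+1, imageDim-patchDim+1);
for channel = 1:3
    kernel = rot90(featureKernels(:,:,channel), 2);                    % flip for conv2
    response = response + conv2(image(:,:,channel), kernel, 'valid');  % convolve and sum over channels
end
featureMap = 1 ./ (1 + exp(-(response + bias)));                       % add bias, apply sigmoid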
Pooling is motivated mainly by the fact that convolution yields so many features that overfitting becomes likely. Images also have a "stationarity" property: a feature that is useful in one image region is very likely to be just as useful in another region, so we can aggregate statistics of the features over different locations (mean pooling or max pooling).
Pooling is computed by taking the mean or the maximum over each p*q region of the previous layer's feature map.
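A minimal sketch of pooling a single region (sizes illustrative):

featureMap = rand(57, 57);        % one convolved feature map
p = 19; q = 19;                   % pooling window
region = featureMap(1:p, 1:q);    % top-left pooling region
meanPooled = mean(region(:));     % mean pooling
maxPooled = max(region(:));       % max pooling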
For an m*n image, let k be the number of hidden units and a*b the number of input units (the patch size); convolution then yields a feature vector of dimension k*(m-a+1)*(n-b+1). With a pooling window of size [p, q], the pooled features have dimension k*(m-a+1)/p*(n-b+1)/q.
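Plugging in this exercise's numbers makes the arithmetic concrete:

m = 64; a = 8; k = 400; p = 19;    % 64x64 image, 8x8 patches, 400 features, 19x19 pooling window
convDim = m - a + 1;               % 57, so each feature map is 57x57
convTotal = k * convDim^2;         % 400*57*57 = 1,299,600 convolved features
poolDim = floor(convDim / p);      % 3, so each pooled map is 3x3
poolTotal = k * poolDim^2;         % 400*3*3 = 3,600 pooled features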
Training: a convolutional neural network can be trained with backpropagation, which is entirely supervised learning. For the formula derivations, see this blog post: http://blog.csdn.net/lu597203933/article/details/46575871.
The linear decoder exercise provides a second approach: train a sparse autoencoder on 8*8 patches (randomly cropped from the large images) with 400 hidden units. For the large images, 400 then plays the role of the number of feature maps, and each hidden unit's parameter vector (1*192 = 8*8*3, where 3 is the number of channels) corresponds to one convolution kernel.
The trained kernels are then applied to the large (64*64*3) images.
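The mapping from one hidden unit's weight row to its three per-channel kernels can be sketched as follows (the whitening is first folded into the weights, as cnnConvolve.m does below with WT = W*ZCAWhite; here WT is a random stand-in):

patchDim = 8; numChannels = 3;
patchSize = patchDim * patchDim;
WT = rand(400, patchSize * numChannels);    % stand-in for W*ZCAWhite, 400x192
featureNum = 1;
kernels = zeros(patchDim, patchDim, numChannels);
for channel = 1:numChannels
    offset = (channel-1) * patchSize;       % the three channels sit side by side in the row
    kernels(:,:,channel) = reshape(WT(featureNum, offset+1:offset+patchSize), patchDim, patchDim);
end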
Exercise answers
cnnConvolve.m
function convolvedFeatures = cnnConvolve(patchDim, numFeatures, images, W, b, ZCAWhite, meanPatch)
% Here patchDim = 8 and numFeatures = hiddenSize (the number of learned features).
%cnnConvolve Returns the convolution of the features given by W and b with
%the given images
%
% Parameters:
% patchDim - patch (feature) dimension
% numFeatures - number of features
% images - large images to convolve with, matrix in the form
% images(r, c, channel, image number)
% W, b - W, b for features from the sparse autoencoder
% ZCAWhite, meanPatch - ZCAWhitening and meanPatch matrices used for
% preprocessing
%
% Returns:
% convolvedFeatures - matrix of convolved features in the form
%                       convolvedFeatures(featureNum, imageNum, imageRow, imageCol)

numImages = size(images, 4);
imageDim = size(images, 1);       % = 64
imageChannels = size(images, 3);

convolvedFeatures = zeros(numFeatures, numImages, imageDim - patchDim + 1, imageDim - patchDim + 1);

% Instructions:
% Convolve every feature with every large image here to produce the
% numFeatures x numImages x (imageDim - patchDim + 1) x (imageDim - patchDim + 1)
% matrix convolvedFeatures, such that
% convolvedFeatures(featureNum, imageNum, imageRow, imageCol) is the
% value of the convolved featureNum feature for the imageNum image over
% the region (imageRow, imageCol) to (imageRow + patchDim - 1, imageCol + patchDim - 1)
%
% Expected running times:
% Convolving with 100 images should take less than 3 minutes
% Convolving with 5000 images should take around an hour
% (So to save time when testing, you should convolve with less images, as
%   described earlier)

% -------------------- YOUR CODE HERE --------------------
% Precompute the matrices that will be used during the convolution. Recall
% that you need to take into account the whitening and mean subtraction
% steps
WT = W*ZCAWhite;                % fold the ZCA whitening into the weights (see the derivation in the exercise writeup)
b_mean = b - WT * meanPatch;    % fold the mean subtraction into the bias

% --------------------------------------------------------

patchSize = patchDim * patchDim;
for imageNum = 1:numImages
  for featureNum = 1:numFeatures

    % convolution of image with feature matrix for each channel
    convolvedImage = zeros(imageDim - patchDim + 1, imageDim - patchDim + 1);
    for channel = 1:imageChannels

      % Obtain the feature (patchDim x patchDim) needed during the convolution
      % ---- YOUR CODE HERE ----
      offset = (channel - 1) * patchSize;
      feature = reshape(WT(featureNum, offset+1 : offset+patchSize), patchDim, patchDim);
      % ------------------------

      % Flip the feature matrix because of the definition of convolution, as explained later
      feature = flipud(fliplr(squeeze(feature)));

      % Obtain the image
      im = squeeze(images(:, :, channel, imageNum));

      % Convolve "feature" with "im", adding the result to convolvedImage
      % be sure to do a 'valid' convolution
      % ---- YOUR CODE HERE ----
      convolvedoneChannel = conv2(im, feature, 'valid');       % convolution over this channel
      convolvedImage = convolvedImage + convolvedoneChannel;   % sum the three channels
      % ------------------------

    end

    % Subtract the bias unit (correcting for the mean subtraction as well)
    % Then, apply the sigmoid function to get the hidden activation
    % ---- YOUR CODE HERE ----
    convolvedImage = sigmoid(convolvedImage + b_mean(featureNum));   % the final value is the result of the sigmoid
    % ------------------------

    % The convolved feature is the sum of the convolved values for all channels
    convolvedFeatures(featureNum, imageNum, :, :) = convolvedImage;
  end
end

end

function sigm = sigmoid(x)
sigm = 1./(1+exp(-x));
end
cnnPool.m
function pooledFeatures = cnnPool(poolDim, convolvedFeatures)
%cnnPool Pools the given convolved features
%
% Parameters:
% poolDim - dimension of pooling region
% convolvedFeatures - convolved features to pool (as given by cnnConvolve)
% convolvedFeatures(featureNum, imageNum, imageRow, imageCol)
%
% Returns:
% pooledFeatures - matrix of pooled features in the form
% pooledFeatures(featureNum, imageNum, poolRow, poolCol)
%

numImages = size(convolvedFeatures, 2);
numFeatures = size(convolvedFeatures, 1);
convolvedDim = size(convolvedFeatures, 3);

resultDim = floor(convolvedDim / poolDim);
pooledFeatures = zeros(numFeatures, numImages, resultDim, resultDim);

% -------------------- YOUR CODE HERE --------------------
% Instructions:
% Now pool the convolved features in regions of poolDim x poolDim,
% to obtain the
% numFeatures x numImages x (convolvedDim/poolDim) x (convolvedDim/poolDim)
% matrix pooledFeatures, such that
% pooledFeatures(featureNum, imageNum, poolRow, poolCol) is the
% value of the featureNum feature for the imageNum image pooled over the
% corresponding (poolRow, poolCol) pooling region
% (see http://ufldl/wiki/index.php/Pooling )
%
% Use mean pooling here.
% -------------------- YOUR CODE HERE --------------------

for imageNum = 1:numImages
  for featureNum = 1:numFeatures
    for poolRow = 1:resultDim
      offsetRow = 1 + (poolRow-1)*poolDim;
      for poolCol = 1:resultDim
        offsetCol = 1 + (poolCol-1)*poolDim;
        patch = convolvedFeatures(featureNum, imageNum, offsetRow:offsetRow+poolDim-1, offsetCol:offsetCol+poolDim-1);
        pooledFeatures(featureNum, imageNum, poolRow, poolCol) = mean(patch(:));   % mean pooling over the poolDim x poolDim region
      end
    end
  end
end

end
cnnExercise.m
%% CS294A/CS294W Convolutional Neural Networks Exercise

%  Instructions
% ------------
%
% This file contains code that helps you get started on the
% convolutional neural networks exercise. In this exercise, you will only
% need to modify cnnConvolve.m and cnnPool.m. You will not need to modify
%  this file.

%%======================================================================
%% STEP 0: Initialization
%  Here we initialize some parameters used for the exercise.

imageDim = 64;          % image dimension
imageChannels = 3;      % number of channels (rgb, so 3)

patchDim = 8;           % patch dimension
numPatches = 50000;     % number of patches

visibleSize = patchDim * patchDim * imageChannels;  % number of input units
outputSize = visibleSize;                           % number of output units
hiddenSize = 400;       % number of hidden units

epsilon = 0.1;          % epsilon for ZCA whitening

poolDim = 19;           % dimension of pooling region

%%======================================================================
%% STEP 1: Train a sparse autoencoder (with a linear decoder) to learn
% features from color patches. If you have completed the linear decoder
%  exercise, use the features that you have obtained from that exercise,
%  loading them into optTheta. Recall that we have to keep around the
%  parameters used in whitening (i.e., the ZCA whitening matrix and the
%  meanPatch)

% --------------------------- YOUR CODE HERE --------------------------
% Train the sparse autoencoder and fill the following variables with
% the optimal parameters:

optTheta = zeros(2*hiddenSize*visibleSize+hiddenSize+visibleSize, 1);
ZCAWhite = zeros(visibleSize, visibleSize);
meanPatch = zeros(visibleSize, 1);
load STL10Features.mat;   % loads the optTheta, ZCAWhite and meanPatch saved in the linear decoder exercise

% --------------------------------------------------------------------

% Display and check to see that the features look good
W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
b = optTheta(2*hiddenSize*visibleSize+1:2*hiddenSize*visibleSize+hiddenSize);

displayColorNetwork( (W*ZCAWhite)');

%%======================================================================
%% STEP 2: Implement and test convolution and pooling
% In this step, you will implement convolution and pooling, and test them
% on a small part of the data set to ensure that you have implemented
% these two functions correctly. In the next step, you will actually
%  convolve and pool the features with the STL10 images.

%% STEP 2a: Implement convolution
%  Implement convolution in the function cnnConvolve in cnnConvolve.m

% Note that we have to preprocess the images in the exact same way
% we preprocessed the patches before we can obtain the feature activations.

load stlTrainSubset.mat % loads numTrainImages, trainImages, trainLabels

%% Use only the first 8 images for testing
convImages = trainImages(:, :, :, 1:8);

% NOTE: Implement cnnConvolve in cnnConvolve.m first!
convolvedFeatures = cnnConvolve(patchDim, hiddenSize, convImages, W, b, ZCAWhite, meanPatch);

%% STEP 2b: Checking your convolution
% To ensure that you have convolved the features correctly, we have
% provided some code to compare the results of your convolution with
%  activations from the sparse autoencoder

% For 1000 random points
for i = 1:1000
    featureNum = randi([1, hiddenSize]);
    imageNum = randi([1, 8]);
    imageRow = randi([1, imageDim - patchDim + 1]);
    imageCol = randi([1, imageDim - patchDim + 1]);

    patch = convImages(imageRow:imageRow + patchDim - 1, imageCol:imageCol + patchDim - 1, :, imageNum);
    patch = patch(:);
    patch = patch - meanPatch;
    patch = ZCAWhite * patch;

    features = feedForwardAutoencoder(optTheta, hiddenSize, visibleSize, patch);

    if abs(features(featureNum, 1) - convolvedFeatures(featureNum, imageNum, imageRow, imageCol)) > 1e-9
        fprintf('Convolved feature does not match activation from autoencoder\n');
        fprintf('Feature Number    : %d\n', featureNum);
        fprintf('Image Number      : %d\n', imageNum);
        fprintf('Image Row         : %d\n', imageRow);
        fprintf('Image Column      : %d\n', imageCol);
        fprintf('Convolved feature : %0.5f\n', convolvedFeatures(featureNum, imageNum, imageRow, imageCol));
        fprintf('Sparse AE feature : %0.5f\n', features(featureNum, 1));
        error('Convolved feature does not match activation from autoencoder');
    end
end

disp('Congratulations! Your convolution code passed the test.');

%% STEP 2c: Implement pooling
%  Implement pooling in the function cnnPool in cnnPool.m

% NOTE: Implement cnnPool in cnnPool.m first!
pooledFeatures = cnnPool(poolDim, convolvedFeatures);

%% STEP 2d: Checking your pooling
%  To ensure that you have implemented pooling, we will use your pooling
%  function to pool over a test matrix and check the results.

testMatrix = reshape(1:64, 8, 8);
expectedMatrix = [mean(mean(testMatrix(1:4, 1:4))) mean(mean(testMatrix(1:4, 5:8))); ...
                  mean(mean(testMatrix(5:8, 1:4))) mean(mean(testMatrix(5:8, 5:8))); ];

testMatrix = reshape(testMatrix, 1, 1, 8, 8);

pooledFeatures = squeeze(cnnPool(4, testMatrix));

if ~isequal(pooledFeatures, expectedMatrix)
    disp('Pooling incorrect');
    disp('Expected');
    disp(expectedMatrix);
    disp('Got');
    disp(pooledFeatures);
else
    disp('Congratulations! Your pooling code passed the test.');
end

%%======================================================================
%% STEP 3: Convolve and pool with the dataset
% In this step, you will convolve each of the features you learned with
% the full large images to obtain the convolved features. You will then
% pool the convolved features to obtain the pooled features for
% classification.
%
% Because the convolved features matrix is very large, we will do the
% convolution and pooling 50 features at a time to avoid running out of
%  memory. Reduce this number if necessary

stepSize = 50;
assert(mod(hiddenSize, stepSize) == 0, 'stepSize should divide hiddenSize');

load stlTrainSubset.mat % loads numTrainImages, trainImages, trainLabels
load stlTestSubset.mat  % loads numTestImages, testImages, testLabels

pooledFeaturesTrain = zeros(hiddenSize, numTrainImages, ...
    floor((imageDim - patchDim + 1) / poolDim), ...
    floor((imageDim - patchDim + 1) / poolDim) );
pooledFeaturesTest = zeros(hiddenSize, numTestImages, ...
    floor((imageDim - patchDim + 1) / poolDim), ...
    floor((imageDim - patchDim + 1) / poolDim) );

tic();

for convPart = 1:(hiddenSize / stepSize)

    featureStart = (convPart - 1) * stepSize + 1;
    featureEnd = convPart * stepSize;

    fprintf('Step %d: features %d to %d\n', convPart, featureStart, featureEnd);
    Wt = W(featureStart:featureEnd, :);
    bt = b(featureStart:featureEnd);

    fprintf('Convolving and pooling train images\n');
    convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
        trainImages, Wt, bt, ZCAWhite, meanPatch);
    pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
    pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;
    toc();
    clear convolvedFeaturesThis pooledFeaturesThis;

    fprintf('Convolving and pooling test images\n');
    convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, ...
        testImages, Wt, bt, ZCAWhite, meanPatch);
    pooledFeaturesThis = cnnPool(poolDim, convolvedFeaturesThis);
    pooledFeaturesTest(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;
    toc();
    clear convolvedFeaturesThis pooledFeaturesThis;

end

% You might want to save the pooled features since convolution and pooling takes a long time
save('cnnPooledFeatures.mat', 'pooledFeaturesTrain', 'pooledFeaturesTest');
toc();

%%======================================================================
%% STEP 4: Use pooled features for classification
% Now, you will use your pooled features to train a softmax classifier,
% using softmaxTrain from the softmax exercise.
% Training the softmax classifer for 1000 iterations should take less than
%  10 minutes.

% Add the path to your softmax solution, if necessary
% addpath /path/to/solution/

% Setup parameters for softmax
softmaxLambda = 1e-4;
numClasses = 4;
% Reshape the pooledFeatures to form an input vector for softmax
softmaxX = permute(pooledFeaturesTrain, [1 3 4 2]);
softmaxX = reshape(softmaxX, numel(pooledFeaturesTrain) / numTrainImages,...
    numTrainImages);
softmaxY = trainLabels;

options = struct;
options.maxIter = 200;
softmaxModel = softmaxTrain(numel(pooledFeaturesTrain) / numTrainImages,...
    numClasses, softmaxLambda, softmaxX, softmaxY, options);

%%======================================================================
%% STEP 5: Test classifier
%  Now you will test your trained classifier against the test images

softmaxX = permute(pooledFeaturesTest, [1 3 4 2]);
softmaxX = reshape(softmaxX, numel(pooledFeaturesTest) / numTestImages, numTestImages);
softmaxY = testLabels;

[pred] = softmaxPredict(softmaxModel, softmaxX);
acc = (pred(:) == softmaxY(:));
acc = sum(acc) / size(acc, 1);
fprintf('Accuracy: %2.3f%%\n', acc * 100);

% You should expect to get an accuracy of around 80% on the test images.
The final accuracy obtained is 80.406%.