Exercise: Implement deep networks for digit classification Exercise link: Exercise: Implement deep networks for digit classification stackedAEPredict.m function [pred] = stackedAEPredict(theta, inputSize, hiddenSize, numClasses, netconfig, data) % stackedAEPre…
Preface 1. Theory: UFLDL tutorial, Deep learning: (16) deep networks 2. Environment: Win7, MATLAB 2015b, 16 GB RAM, 2 TB disk 3. Task: Exercise: Implement deep networks for digit classification. Use a deep network to recognize the handwritten digits in the MNIST database: take the 60,000 labeled training images (60,000 28*28 patches) as the training set and feed them into a stacked autoencoder, whose first autoencoder…
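As a rough illustration of what stackedAEPredict.m from the excerpt above computes, here is a minimal NumPy sketch (not the exercise's MATLAB code): the input is pushed through the stacked sigmoid encoder layers and a final softmax layer, and the predicted digit is the argmax. The weight names (W1, b1, W2, b2, Wsoft) are placeholders, not the exercise's actual parameter layout.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stacked_ae_predict(W1, b1, W2, b2, Wsoft, data):
    """data: (inputSize, m). Returns the predicted class index for each column."""
    a1 = sigmoid(W1 @ data + b1[:, None])   # first autoencoder's hidden features
    a2 = sigmoid(W2 @ a1 + b2[:, None])     # second autoencoder's hidden features
    scores = Wsoft @ a2                     # softmax layer scores, shape (numClasses, m)
    return np.argmax(scores, axis=0)        # argmax of softmax == argmax of the scores
```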
Exercise:Convolution and Pooling Exercise link: Exercise:Convolution and Pooling cnnExercise.m %% CS294A/CS294W Convolutional Neural Networks Exercise % Instructions % ------------ % % This file contains code that helps you get started on the % convolutional n…
Exercise:PCA and Whitening Exercise link: Exercise:PCA and Whitening pca_gen.m %%================================================================ %% Step 0a: Load data % Here we provide the code to load natural image data into x. % x will be a * matrix, where…
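For reference, the core computation of this exercise (covariance, rotation into the PCA basis, PCA/ZCA whitening with an epsilon regularizer) can be sketched in a few lines of NumPy; the variable names below are illustrative rather than pca_gen.m's, and zero-mean data is assumed.

```python
import numpy as np

def zca_whiten(x, epsilon=0.1):
    """x: (n, m) data matrix whose columns are assumed to be zero-mean."""
    m = x.shape[1]
    sigma = x @ x.T / m                                   # covariance matrix
    U, S, _ = np.linalg.svd(sigma)                        # eigenvectors U, eigenvalues S
    x_rot = U.T @ x                                       # rotate into the PCA basis
    x_pca_white = x_rot / np.sqrt(S[:, None] + epsilon)   # rescale each component to unit variance
    x_zca_white = U @ x_pca_white                         # rotate back: ZCA whitening
    return x_zca_white
```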
Exercise:PCA in 2D Exercise link: Exercise:PCA in 2D pca_2d.m close all %%================================================================ %% Step : Load data % We have provided the code to load data from pcaData.txt into x. % x * matrix, where the kth column…
Exercise:Sparse Autoencoder Exercise link: Exercise:Sparse Autoencoder Notes: 1. The pixel values of the training samples must be normalized, because the output layer's activation is the logistic function with range (0,1); without per-pixel normalization the autoencoder cannot reconstruct the input. 2. During training, the vectorized implementation is about ten times faster than the for-loop implementation. 3. The final image array is built from the transpose of the weight matrix W1, with each column used as one image. Column i is in fact the image xi that maximally activates hidden unit i, multiplied by a constant factor C (where C is determined by the elements of row i of W1…
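The visualization fact mentioned in note 3 is that, under a norm constraint on the input, the image that maximally activates hidden unit i is row i of W1 divided by its L2 norm. A minimal NumPy sketch of that step (display routine omitted, and the reshape to patch size is left to the caller):

```python
import numpy as np

def max_activation_images(W1):
    """W1: (hiddenSize, visibleSize). Row i is the input that maximally activates
    hidden unit i when inputs are constrained to unit L2 norm."""
    norms = np.linalg.norm(W1, axis=1, keepdims=True)   # per-row constant factor
    return W1 / norms                                    # one image per row
```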
Exercise:Softmax Regression Exercise link: Exercise:Softmax Regression softmaxCost.m function [cost, grad] = softmaxCost(theta, numClasses, inputSize, lambda, data, labels) % numClasses - the number of classes % inputSize - the size N of the input vector % la…
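softmaxCost.m computes the regularized softmax cost and its gradient. A hedged NumPy sketch of the same math, with the usual max-subtraction for numerical stability and a one-hot groundTruth matrix; the argument names follow the excerpt's comments, not the exercise's actual code (which reshapes theta from a vector):

```python
import numpy as np

def softmax_cost(theta, num_classes, input_size, lam, data, labels):
    """theta: (numClasses, inputSize), data: (inputSize, m), labels: (m,) in 0..numClasses-1."""
    m = data.shape[1]
    ground_truth = np.zeros((num_classes, m))
    ground_truth[labels, np.arange(m)] = 1.0

    scores = theta @ data
    scores -= scores.max(axis=0, keepdims=True)          # prevent overflow in exp
    probs = np.exp(scores)
    probs /= probs.sum(axis=0, keepdims=True)

    cost = -np.sum(ground_truth * np.log(probs)) / m + 0.5 * lam * np.sum(theta ** 2)
    grad = -(ground_truth - probs) @ data.T / m + lam * theta
    return cost, grad
```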
Exercise:Learning color features with Sparse Autoencoders Exercise link: Exercise:Learning color features with Sparse Autoencoders sparseAutoencoderLinearCost.m function [cost,grad,features] = sparseAutoencoderLinearCost(theta, visibleSize, hiddenSize, ... lam…
Exercise:Self-Taught Learning Exercise link: Exercise:Self-Taught Learning feedForwardAutoencoder.m function [activation] = feedForwardAutoencoder(theta, hiddenSize, visibleSize, data) % theta: trained weights from the autoencoder % visibleSize: the number of…
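feedForwardAutoencoder.m only needs the encoder half of the trained autoencoder; conceptually it is a single sigmoid layer. A one-function NumPy sketch, assuming W1 and b1 have already been unpacked from theta:

```python
import numpy as np

def feedforward_autoencoder(W1, b1, data):
    """W1: (hiddenSize, visibleSize), b1: (hiddenSize,), data: (visibleSize, m).
    Returns the hidden-layer activations used as self-taught features."""
    return 1.0 / (1.0 + np.exp(-(W1 @ data + b1[:, None])))
```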
Exercise:Vectorization Exercise link: Exercise:Vectorization Notes: The MNIST image pixels are already normalized. If you normalize them again with sampleIMAGES.m from Exercise:Sparse Autoencoder, the visualized weights learned during training will look like the figure below. Change the parameter settings in train.m: visibleSize = *; % number of input units hiddenSize = ; % number of hidden units spar…
Yongchao Xu--[2018]TextField_Learning A Deep Direction Field for Irregular Scene Text Detection Paper: Yongchao Xu--[2018]TextField_Learning A Deep Direction Field for Irregular Scene Text Detection Authors Highlights: The proposed TextField method is quite novel: it uses the vector from each point to its nearest boundary point to distinguish different instance…
This exercise requires completing the forward pass, cost, error, and gradient computations of the CNN. You need to understand the principle of these four steps at every layer and make full use of MATLAB's matrix operations. The overall process is roughly summarized in the figure below: STEP 1: Implement CNN Objective STEP 1a: Forward Propagation Forward propagation computes the network's output for an input image; the network has three layers: convolution->pooling->softmax(…
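To make the three-layer forward pass concrete, here is a hedged NumPy sketch of one valid-convolution layer with a sigmoid nonlinearity, one mean-pooling layer, and a softmax output. It mirrors the structure described above but is not the exercise's MATLAB code; the single input channel and square filter/pool sizes are simplifying assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_pass(image, filters, b_conv, pool_dim, W_soft, b_soft):
    """image: (H, W); filters: (numFilters, f, f). Returns class probabilities."""
    H, W = image.shape
    num_filters, f, _ = filters.shape
    conv_dim = H - f + 1

    # Convolution layer: valid convolution + sigmoid
    conv = np.zeros((num_filters, conv_dim, conv_dim))
    for k in range(num_filters):
        for i in range(conv_dim):
            for j in range(conv_dim):
                patch = image[i:i + f, j:j + f]
                conv[k, i, j] = sigmoid(np.sum(patch * filters[k]) + b_conv[k])

    # Mean pooling over non-overlapping pool_dim x pool_dim regions
    p = conv_dim // pool_dim
    pooled = conv.reshape(num_filters, p, pool_dim, p, pool_dim).mean(axis=(2, 4))

    # Softmax layer on the flattened pooled features
    scores = W_soft @ pooled.reshape(-1) + b_soft
    scores -= scores.max()
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs
```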
InceptionV1 Original paper: Going deeper with convolutions (Chinese-English bilingual version) InceptionBN Original paper: Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (Chinese-English bilingual version) InceptionV2/V3 Original paper: Rethinking the Inception Architecture for Computer Visi…
[Paper title] Convolutional neural network architecture for geometric matching (CVPR 2017) [Authors] Ignacio Rocco, Relja Arandjelović, Josef Sivic [Link] Paper (15 pages // double column) [Abstract] We address the problem of determining correspondences between two…
The handwritten character recognition model LeNet-5 appeared in 1994 and is one of the earliest convolutional neural networks. The original paper is Gradient-Based Learning Applied to Document Recognition; thanks to the explanations by various bloggers, especially that blog, for helping my understanding. Model details: C1 6@28×28, S2 6@14×14, C3 16@10×10, S4 16@5×5, C5 120, F6 84, Output 10. Model overview and code reproduction: the figure below is the familiar LeNet-5 architecture diagram; LeNet-5 consists of 7 layers of CN…
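A hedged PyTorch sketch of the layer sizes listed above (C1 6@28×28 through the 10-way output), assuming the classic 32×32 single-channel input; it reproduces the shapes rather than every detail of the 1998 paper (e.g., the trainable subsampling coefficients and RBF output layer).

```python
import torch
import torch.nn as nn

class LeNet5(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5),    # C1: 6@28x28 (from a 32x32 input)
            nn.Tanh(),
            nn.AvgPool2d(2),                   # S2: 6@14x14
            nn.Conv2d(6, 16, kernel_size=5),   # C3: 16@10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                   # S4: 16@5x5
            nn.Conv2d(16, 120, kernel_size=5), # C5: 120@1x1
            nn.Tanh(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(120, 84),                # F6: 84
            nn.Tanh(),
            nn.Linear(84, 10),                 # Output: 10 classes
        )

    def forward(self, x):                      # x: (N, 1, 32, 32)
        return self.classifier(self.features(x))
```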
I finally have two months of free time to digest and consolidate what I have learned; hopefully there won't be too many distractions. In many courses I have studied convolution, pooling, dropout, and other basics, but in my head they are still scattered concepts without an overall framework. This series of posts aims to organize them into a clearer picture. ## Outline Convolution tensorflow-conv Pooling tensorflow-pooling Backpropagation Gradient vanishing and gradient explosion ## Notes [Convolution] The purpose of convolution is to extract features from the raw data; the process uses a kernel that, following…
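Since the outline names tensorflow-conv and tensorflow-pooling, here is a minimal TensorFlow 2 illustration of the two ops; the tensor shapes are made up for the example.

```python
import tensorflow as tf

# NHWC input: a batch of one 28x28 single-channel image; 6 filters of size 5x5
x = tf.random.normal([1, 28, 28, 1])
filters = tf.random.normal([5, 5, 1, 6])

conv = tf.nn.conv2d(x, filters, strides=1, padding="VALID")           # -> (1, 24, 24, 6)
pooled = tf.nn.max_pool2d(conv, ksize=2, strides=2, padding="VALID")  # -> (1, 12, 12, 6)
print(conv.shape, pooled.shape)
```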
Goal: how to train very deep neural networks. Overly deep networks, however, run into all kinds of problems, such as vanishing gradients, which make them hard to train. The authors borrow an idea similar to the LSTM: add gates to control the mix of the data before and after the transform, which they call a Highway network. As for why it works... probably for the same reasons the LSTM works. Method: start from a plain network in which each layer H maps an input x to an output y; H usually consists of an affine transform followed by a nonlinear transform, as shown below. On top of this, the highway network adds two gates: 1) T: transform gat…
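A compact NumPy sketch of one highway layer, writing out the mix y = T(x)·H(x) + (1 − T(x))·x that the transform gate controls; initializing the gate bias to a negative value (so the layer starts close to the identity) follows the paper's recommendation, while the variable names here are my own.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, Wh, bh, Wt, bt):
    """x: (d,). One highway layer: gate T decides how much of H(x) vs. x to pass."""
    H = np.tanh(Wh @ x + bh)          # plain transform: affine + nonlinearity
    T = sigmoid(Wt @ x + bt)          # transform gate in (0, 1)
    return T * H + (1.0 - T) * x      # carry gate is 1 - T

d = 8
rng = np.random.default_rng(0)
Wh, Wt = rng.normal(size=(d, d)), rng.normal(size=(d, d))
bh = np.zeros(d)
bt = -2.0 * np.ones(d)                # negative gate bias: layer initially behaves like the identity
y = highway_layer(rng.normal(size=d), Wh, bh, Wt, bt)
```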
In the previous post we went through the details of LeNet, which consists of two convolutional layers, two pooling layers, and two fully connected layers. All convolutions use 5*5 kernels with stride = 1, and the pooling is MAX pooling. Overall it has three key characteristics: local receptive fields, weight sharing, and pooling. In 2012 Alex published AlexNet, which is deeper than LeNet-5 and can learn more complex high-dimensional image features. Next, we will study the AlexNet model together. Original paper: ImageNet Classification with Deep Convolutional Neural Networks. Translation: AlexN…
This article is a repost. Author: Microstrong0305. Source: CSDN. Original: https://blog.csdn.net/program_developer/article/details/80737724 1. Introduction to Dropout 1.1 Why Dropout was introduced In machine learning, if a model has too many parameters and too few training samples, the trained model easily overfits. Overfitting is a common problem when training neural networks; it shows up as a small loss and high prediction accuracy on the training data, but on the test data the loss is relatively large and the pre…
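To pin down the mechanism before the truncated explanation continues, here is a minimal NumPy sketch of inverted dropout applied to a hidden activation; this is the common formulation, and whether it matches the reposted article's exact code is an assumption.

```python
import numpy as np

def dropout_forward(a, keep_prob=0.5, training=True):
    """a: activations. During training, randomly zero units and rescale by 1/keep_prob
    so the expected activation is unchanged; at test time, return a unchanged."""
    if not training:
        return a
    mask = (np.random.rand(*a.shape) < keep_prob) / keep_prob
    return a * mask
```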
Noting these down to study when I have time. http://nlp.stanford.edu/projects/DeepLearningInNaturalLanguageProcessing.shtml http://nlp.stanford.edu/courses/NAACL2013/ Fast and Robust Neural Network Joint Models for Statistical Machine Translation ACL 2014 paper list http://blog.sina.com.cn/s…
Implement strStr(). Returns the index of the first occurrence of needle in haystack, or -1 if needle is not part of haystack. Solution:  class Solution { public: int strStr(string haystack, string needle) { //runtime:4ms int len1=haystack.size(), len…
Problem: Implement the following operations of a stack using queues. push(x) -- Push element x onto stack. pop() -- Removes the element on top of the stack. top() -- Get the top element. empty() -- Return whether the stack is empty. Notes: You must use on…
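A short Python sketch of the single-queue approach to the problem above (rotate the queue after each push so the newest element sits at the front); the original post presumably uses C++ queue operations, and this is just one way to meet the constraint.

```python
from collections import deque

class MyStack:
    def __init__(self):
        self.q = deque()

    def push(self, x: int) -> None:
        self.q.append(x)
        for _ in range(len(self.q) - 1):     # rotate so x becomes the front element
            self.q.append(self.q.popleft())

    def pop(self) -> int:
        return self.q.popleft()

    def top(self) -> int:
        return self.q[0]

    def empty(self) -> bool:
        return not self.q
```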
Problem: Implement the following operations of a queue using stacks. push(x) -- Push element x to the back of queue. pop() -- Removes the element from in front of queue. peek() -- Get the front element. empty() -- Return whether the queue is empty. Notes:…
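The standard two-stack trick for the problem above, sketched in Python: push onto an input stack, and move elements to an output stack only when it runs empty, which gives amortized O(1) operations.

```python
class MyQueue:
    def __init__(self):
        self.inbox, self.outbox = [], []

    def push(self, x: int) -> None:
        self.inbox.append(x)

    def _shift(self) -> None:
        if not self.outbox:                  # refill only when the output stack is empty
            while self.inbox:
                self.outbox.append(self.inbox.pop())

    def pop(self) -> int:
        self._shift()
        return self.outbox.pop()

    def peek(self) -> int:
        self._shift()
        return self.outbox[-1]

    def empty(self) -> bool:
        return not self.inbox and not self.outbox
```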
Implement strStr() Implement strStr(). Returns a pointer to the first occurrence of needle in haystack, or null if needle is not part of haystack. Solution 1: brute force class Solution { public: int strStr(string haystack, string needle) { int m = haystack.size();…
Implement Trie (Prefix Tree) Implement a trie with insert, search, and startsWith methods. Note: You may assume that all inputs consist of lowercase letters a-z. Each letter corresponds to one subtree, so this is a 26-ary tree; the end flag marks whether a string ending at that letter exists. class TrieNode { public: TrieNode* childre…
Deep Learning: A Practitioner's Approach http://www.amazon.com/Deep-Learning-Practitioners-Adam-Gibson/dp/1491914254/ref=sr_1_1?ie=UTF8&qid=1430704761&sr=8-1&keywords=deep+learning…
Implement strStr(). Return the index of the first occurrence of needle in haystack, or -1 if needle is not part of haystack. Example 1: Input: haystack = "hello", needle = "ll" Output: 2 Example 2: Input: haystack = "aaaaa",…
A trie (pronounced as "try") or prefix tree is a tree data structure used to efficiently store and retrieve keys in a dataset of strings. There are various applications of this data structure, such as autocomplete and spellchecker. Implement the…
Author: 负雪明烛 (fuxuemingzhu) id: fuxuemingzhu Personal blog: http://fuxuemingzhu.cn/ Contents: Problem description, Problem summary, Solution approach, Python solution, Java solution, Date [LeetCode] Problem link: https://leetcode.com/problems/implement-queue-using-stacks/ Total Accepted: 42648 Total Submissions: 125482 Difficulty: Easy Problem description: Implement t…
Author: 负雪明烛 (fuxuemingzhu) id: fuxuemingzhu Personal blog: http://fuxuemingzhu.cn/ Contents: Problem description, Problem summary, Solution approach, Date Problem link: https://leetcode.com/problems/implement-stack-using-queues/#/description Problem description: Implement the following operations of a stack using queues. push(x) – Push element x onto…