I first got into DL at the beginning of November 2013, but between a busy boss and assorted other problems, my understanding of DL is nowhere near as deep as the CSDN experts such as zouxy09. Mostly I just feel I have made no real progress and have been wasting my days (embarrassing, after all this time...). So I am starting this post to record how I stumbled along, one silly step at a time, and to get my own thinking in order. Dear readers, please be gentle; if there are mistakes I will fix them right away, and thank you for any corrections. (Then again, whether anyone reads this at all is an open question. Ha.)

Recommended: tornadomeet's study notes on cnblogs

http://www.cnblogs.com/tornadomeet/category/497607.html

zouxy09's study notes on CSDN

http://blog.csdn.net/zouxy09

sunmenggmail's compilation of DL papers on CSDN

http://blog.csdn.net/sunmenggmail/article/details/20904867

falao_beiliu's notes on CSDN

http://blog.csdn.net/mytestmy/article/category/1465487

Rachel-Zhang, the DL star from Zhejiang University

http://blog.csdn.net/abcjennifer/article/details/7826917

A DL forum in China, just founded; everyone is welcome to follow it.

http://dl.xsoftlab.net/

Below are the survey articles; these are the only ones I can recall for now.

2009 Learning Deep Architectures for AI

http://deeplearning.net/reading-list/

2010 Deep Machine Learning – A New Frontier in Artificial Intelligence Research

http://deeplearning.net/reading-list/

2011 An Introduction to Deep Learning

https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2011-4.pdf

2012 Representation Learning: A Review and New Perspectives

http://deeplearning.net/reading-list/

2012 深度学习研究综述 (in Chinese: "A Survey of Deep Learning Research")

2014  Deep Learning in Neural Networks: An Overview

http://arxiv.org/abs/1404.7828

2014 Object Detection with Deep Learning (CVPR 2014 Tutorial)

2014 Deep Learning: Methods and Applications

Li Deng at Microsoft works mainly on speech, but he has also written quite a few survey-style pieces: http://research.microsoft.com/en-us/people/deng/

2014 A Tutorial Survey of Architectures, Algorithms, and Applications for Deep Learning

http://research.microsoft.com/en-us/people/deng/

In fact a lot of the slide decks alone are excellent: Hinton's, Andrew Ng's, Yann LeCun's, and Yoshua Bengio's; you can find plenty of useful material straight from their home pages. Beyond being giants of the field, they are also, admirably, teachers: they have released lecture videos and a lot of material that is accessible to beginners like me. There are also Professor Wu Lide's lecture videos (available on Youku, though with a lot of background noise).

http://blog.coursegraph.com/公开课可下载资源汇总 (a very complete set of downloadable course videos: Ng's machine learning, Hinton's machine learning, natural language processing, and more.)

Reading lists: there is the one at http://deeplearning.net/reading-list/, and there is also the list Yoshua Bengio recommends below (some links are dead; I only discovered it this month, so if it is outdated, please ignore me.)

Reading lists for new LISA students

Research in General

● How to write a great research paper

Basics of machine learning

● http://www.iro.umontreal.ca/~bengioy/DLbook/math.html

● http://www.iro.umontreal.ca/~bengioy/DLbook/ml.html

Basics of deep learning

● http://www.iro.umontreal.ca/~bengioy/DLbook/intro.html

● http://www.iro.umontreal.ca/~bengioy/DLbook/mlp.html

● Learning deep architectures for AI

● Practical recommendations for gradient-based training of deep architectures

● Quick’n’dirty introduction to deep learning: Advances in Deep Learning

● A fast learning algorithm for deep belief nets

● Greedy Layer-Wise Training of Deep Networks

● Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion

● Contractive auto-encoders: Explicit invariance during feature extraction

● Why does unsupervised pre-training help deep learning?

● An Analysis of Single Layer Networks in Unsupervised Feature Learning

● The importance of Encoding Versus Training With Sparse Coding and Vector Quantization

● Representation Learning: A Review and New Perspectives

● Deep Learning of Representations: Looking Forward

● Measuring Invariances in Deep Networks

● Neural networks course at USherbrooke [youtube]

Feedforward nets

● http://www.iro.umontreal.ca/~bengioy/DLbook/mlp.html

● “Improving Neural Nets with Dropout” by Nitish Srivastava

● “Deep Sparse Rectifier Neural Networks”

● “What is the best multi-stage architecture for object recognition?”

● “Maxout Networks”
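
The dropout paper above ("Improving Neural Nets with Dropout") is easier to grasp with a toy sketch. The following is my own minimal illustration in plain Python, not code from the paper: during training each unit is kept with probability keep_prob and the survivors are scaled by 1/keep_prob ("inverted" dropout), so the expected activation is unchanged and nothing needs rescaling at test time.

```python
import random

def dropout(activations, keep_prob, rng=random.Random(0)):
    """Inverted dropout: zero each unit with probability (1 - keep_prob)
    and scale survivors by 1/keep_prob so the expected value is unchanged."""
    out = []
    for a in activations:
        if rng.random() < keep_prob:
            out.append(a / keep_prob)  # survivor, scaled up
        else:
            out.append(0.0)            # dropped unit
    return out

acts = [1.0] * 10000
kept = dropout(acts, keep_prob=0.8)
print(round(sum(kept) / len(kept), 1))  # close to 1.0: expectation preserved
```

Real implementations draw a fresh mask per mini-batch and per layer; this sketch only shows the per-unit mechanics.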

MCMC

● Iain Murray’s MLSS slides

● Radford Neal’s Review Paper (old but still very comprehensive)

● Better Mixing via Deep Representations

Restricted Boltzmann Machines

● Unsupervised learning of distributions of binary vectors using 2-layer networks

● A practical guide to training restricted Boltzmann machines

● Training restricted Boltzmann machines using approximations to the likelihood gradient

● Tempered Markov Chain Monte Carlo for training of Restricted Boltzmann Machine

● How to Center Binary Restricted Boltzmann Machines

● Enhanced Gradient for Training Restricted Boltzmann Machines

● Using fast weights to improve persistent contrastive divergence

● Training Products of Experts by Minimizing Contrastive Divergence
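
As a companion to the contrastive divergence papers above, here is a bare-bones CD-1 update for a binary RBM in plain Python. This is my own toy sketch, not code from any of the papers: sample the hidden units from the data, reconstruct the visibles, resample the hiddens, and nudge the weights toward the data statistics and away from the reconstruction statistics.

```python
import math
import random

rng = random.Random(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_hidden(v, W, b_h):
    """P(h_j = 1 | v) = sigmoid(b_h[j] + sum_i v[i] * W[i][j])."""
    probs = [sigmoid(b_h[j] + sum(v[i] * W[i][j] for i in range(len(v))))
             for j in range(len(b_h))]
    return probs, [1.0 if rng.random() < p else 0.0 for p in probs]

def sample_visible(h, W, b_v):
    """P(v_i = 1 | h) = sigmoid(b_v[i] + sum_j h[j] * W[i][j])."""
    probs = [sigmoid(b_v[i] + sum(h[j] * W[i][j] for j in range(len(h))))
             for i in range(len(b_v))]
    return probs, [1.0 if rng.random() < p else 0.0 for p in probs]

def cd1_update(v0, W, b_v, b_h, lr=0.1):
    """One CD-1 step: positive phase on the data, negative phase on a
    one-step reconstruction; move W toward the difference of statistics."""
    ph0, h0 = sample_hidden(v0, W, b_h)
    _, v1 = sample_visible(h0, W, b_v)
    ph1, _ = sample_hidden(v1, W, b_h)
    for i in range(len(v0)):
        for j in range(len(b_h)):
            W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])

# Toy run: 4 visible units, 2 hidden units, one all-ones data vector.
n_v, n_h = 4, 2
W = [[rng.gauss(0, 0.1) for _ in range(n_h)] for _ in range(n_v)]
b_v, b_h = [0.0] * n_v, [0.0] * n_h
for _ in range(100):
    cd1_update([1.0, 1.0, 1.0, 1.0], W, b_v, b_h)
```

Hinton's practical guide (listed above) covers the details this sketch ignores: bias updates, mini-batches, momentum, weight decay, and monitoring the reconstruction error.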

Boltzmann Machines

● Deep Boltzmann Machines (Salakhutdinov & Hinton)

● Multimodal Learning with Deep Boltzmann Machines

● Multi-Prediction Deep Boltzmann Machines

● A Two-stage Pretraining Algorithm for Deep Boltzmann Machines

Regularized Auto-Encoders

● The Manifold Tangent Classifier

Regularization

Stochastic Nets & GSNs

● Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation

● Learning Stochastic Feedforward Neural Networks

● Generalized Denoising Auto-Encoders as Generative Models

● Deep Generative Stochastic Networks Trainable by Backprop

Others

● Slow, Decorrelated Features for Pretraining Complex Cell-like Networks

● What Regularized Auto-Encoders Learn from the Data Generating Distribution

● Generalized Denoising Auto-Encoders as Generative Models

● Why the logistic function?

Recurrent Nets

● Learning long-term dependencies with gradient descent is difficult

● Advances in Optimizing Recurrent Networks

● Learning recurrent neural networks with Hessian-free optimization

● On the importance of momentum and initialization in deep learning,

● Long short-term memory (Hochreiter & Schmidhuber)

● Generating Sequences With Recurrent Neural Networks

● Long Short-Term Memory in Echo State Networks: Details of a Simulation Study

● The "echo state" approach to analysing and training recurrent neural networks

● Backpropagation-Decorrelation: online recurrent learning with O(N) complexity

● New results on recurrent network training: Unifying the algorithms and accelerating convergence

● Audio Chord Recognition with Recurrent Neural Networks

● Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription
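
The first paper in this section ("Learning long-term dependencies with gradient descent is difficult") argues that gradients flowing back through a recurrent net shrink or blow up exponentially with the time lag. A back-of-the-envelope sketch of my own in plain Python: for the scalar recurrence h_t = w * h_{t-1}, the gradient of h_T with respect to h_0 is w^T, so |w| < 1 vanishes and |w| > 1 explodes.

```python
def gradient_through_time(w, T):
    """d h_T / d h_0 for the scalar recurrence h_t = w * h_{t-1}:
    the chain rule multiplies the same factor w at every time step."""
    g = 1.0
    for _ in range(T):
        g *= w
    return g

# |w| < 1: the gradient vanishes with the time lag T.
print(gradient_through_time(0.9, 50))  # about 0.005
# |w| > 1: the gradient explodes.
print(gradient_through_time(1.1, 50))  # about 117
```

LSTM (Hochreiter & Schmidhuber, above) attacks exactly this problem by routing the gradient through a cell whose self-connection is effectively w = 1.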

Convolutional Nets

● http://www.iro.umontreal.ca/~bengioy/DLbook/convnets.html

● Generalization and Network Design Strategies (LeCun)

● ImageNet Classification with Deep Convolutional Neural Networks, Alex Krizhevsky, Ilya Sutskever, Geoffrey E Hinton, NIPS 2012.

● On Random Weights and Unsupervised Feature Learning

Optimization issues with DL

● Curriculum Learning

● Evolving Culture vs Local Minima

● Knowledge Matters: Importance of Prior Information for Optimization

● Efficient Backprop

● Practical recommendations for gradient-based training of deep architectures

● Natural Gradient Works Efficiently (Amari 1998)

● Hessian Free

● Natural Gradient (TONGA)

● Revisiting Natural Gradient
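
Several entries above ("Efficient Backprop", Bengio's practical recommendations) are about tuning plain gradient descent. As a reminder of the procedure they are tuning, here is a toy example of my own, not taken from either paper: minimize f(w) = (w - 3)^2 by repeatedly stepping against the gradient.

```python
def minimize(grad, w0, lr=0.1, steps=100):
    """Plain gradient descent: w <- w - lr * grad(w)."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# f(w) = (w - 3)^2, so f'(w) = 2 * (w - 3); the minimum sits at w = 3.
w = minimize(lambda w: 2.0 * (w - 3.0), w0=0.0)
print(round(w, 4))  # converges to 3.0
```

The papers above are about what this sketch leaves out: choosing the learning rate, momentum, curvature-aware (natural or Hessian-free) steps, and stochastic mini-batch gradients.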

NLP + DL

● Natural Language Processing (Almost) from Scratch

● DeViSE: A Deep Visual-Semantic Embedding Model

● Distributed Representations of Words and Phrases and their Compositionality

● Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection

CV+RBM

● Fields of Experts

● What makes a good model of natural images?

● Phone Recognition with the mean-covariance restricted Boltzmann machine

● Unsupervised Models of Images by Spike-and-Slab RBMs

CV + DL

● Imagenet classification with deep convolutional neural networks

● Learning to relate images

Scaling Up

● Large Scale Distributed Deep Networks

● Random search for hyper-parameter optimization

● Practical Bayesian Optimization of Machine Learning Algorithms

DL + Reinforcement learning

● Playing Atari with Deep Reinforcement Learning (paper not officially released yet!)

Graphical Models Background

● An Introduction to Graphical Models (Mike Jordan, brief course notes)

● A View of the EM Algorithm that Justifies Incremental, Sparse and Other Variants (Neal & Hinton, important paper to the modern understanding of Expectation-Maximization)

● A Unifying Review of Linear Gaussian Models (Roweis & Ghahramani, ties together PCA, factor analysis, hidden Markov models, Gaussian mixtures, k-means, linear dynamical systems)

● An Introduction to Variational Methods for Graphical Models (Jordan et al, mean-field, etc.)

Writing

● Writing a great research paper (video of the presentation)

Software documentation

● Python, Theano, Pylearn2, Linux (bash) (at least the 5 first sections), git (5 first sections), github/contributing to it (Theano doc), vim tutorial or emacs tutorial

Software lists of built-in commands/functions

● Bash commands

● List of Built-in Python Functions

● vim commands
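
For the "List of Built-in Python Functions" entry above, here is a short tour of my own picks, nothing exhaustive: a few built-ins that come up constantly in day-to-day scripting.

```python
scores = {"rbm": 0.82, "dbn": 0.88, "conv": 0.95}

# sorted with a key function: rank model names by score, best first.
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['conv', 'dbn', 'rbm']

# enumerate: pair ranks with names without manual index bookkeeping.
for rank, name in enumerate(ranking, start=1):
    print(rank, name)

# any/all: one-line checks over an iterable.
print(all(v > 0.5 for v in scores.values()))  # True
print(any(v > 0.9 for v in scores.values()))  # True
```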

Other Software stuff to know about:

● screen

● ssh

● ipython

● matplotlib
