Reading lists for new LISA students (reposted)
Research in General
Basics of machine learning
Basics of deep learning
Practical recommendations for gradient-based training of deep architectures
Quick’n’dirty introduction to deep learning: Advances in Deep Learning
Contractive auto-encoders: Explicit invariance during feature extraction
An Analysis of Single-Layer Networks in Unsupervised Feature Learning
The Importance of Encoding Versus Training with Sparse Coding and Vector Quantization
Feedforward nets
“Improving Neural Networks with Dropout” by Nitish Srivastava
“What is the best multi-stage architecture for object recognition?”
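The Srivastava thesis above trains feedforward nets by randomly zeroing hidden units during training. As a minimal illustration (not the thesis's implementation), here is the common "inverted dropout" variant, which scales surviving units at train time so that test-time code needs no change; the drop rate and layer size are arbitrary choices:

```python
import random

def dropout(activations, p_drop, rng):
    """Inverted dropout: zero each unit with probability p_drop and
    scale survivors by 1/(1 - p_drop), so the expected activation is
    unchanged. At test time this function is simply not applied."""
    keep = 1.0 - p_drop
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

rng = random.Random(0)
hidden = [1.0] * 10
out = dropout(hidden, 0.5, rng)  # each entry is now 0.0 or 2.0
```

Averaged over many units, the output matches the input in expectation, which is the point of the inverted scaling.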
MCMC
Radford Neal’s review paper, “Probabilistic Inference Using Markov Chain Monte Carlo Methods” (old but still very comprehensive)
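Neal's review covers Metropolis-style samplers in depth. As a minimal concrete instance, a random-walk Metropolis sampler targeting a standard 1-D Gaussian; the step size and chain length here are arbitrary illustrative choices:

```python
import math
import random

def metropolis(log_p, x0, n_steps, step, rng):
    """Random-walk Metropolis: propose x' = x + noise, accept with
    probability min(1, p(x') / p(x)). Returns the visited states."""
    x, lp = x0, log_p(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_p(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept the move
            x, lp = prop, lp_prop
        chain.append(x)  # on rejection, the current state repeats
    return chain

# Target: standard normal, log p(x) = -x^2 / 2 up to a constant.
rng = random.Random(0)
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 20000, 1.0, rng)
```

The empirical mean and variance of the chain approach 0 and 1, the moments of the target.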
Restricted Boltzmann Machines
Unsupervised learning of distributions on binary vectors using two layer networks
Training restricted Boltzmann machines using approximations to the likelihood gradient
Tempered Markov Chain Monte Carlo for training of Restricted Boltzmann Machines
Enhanced Gradient for Training Restricted Boltzmann Machines
Using fast weights to improve persistent contrastive divergence
Training Products of Experts by Minimizing Contrastive Divergence
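Several of the papers above refine contrastive divergence (CD) training. The sketch below is CD-1 for a tiny binary RBM, with two simplifying assumptions for determinism: probabilities are used everywhere in place of binary samples (a mean-field shortcut, not what the papers do), and the data, sizes, learning rate, and iteration count are invented for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_vis, n_hid = 4, 2
W = 0.5 * rng.standard_normal((n_vis, n_hid))
b_v = np.zeros(n_vis)  # visible biases
b_h = np.zeros(n_hid)  # hidden biases

# Two complementary binary patterns stand in for a dataset.
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]], dtype=float)

lr = 0.2
for _ in range(2000):
    # Positive phase: hidden probabilities given the data.
    h0 = sigmoid(data @ W + b_h)
    # One reconstruction step (probabilities instead of samples).
    v1 = sigmoid(h0 @ W.T + b_v)
    h1 = sigmoid(v1 @ W + b_h)
    # CD-1 update: <v h>_data - <v h>_reconstruction.
    W += lr * (data.T @ h0 - v1.T @ h1) / len(data)
    b_v += lr * (data - v1).mean(axis=0)
    b_h += lr * (h0 - h1).mean(axis=0)

recon = sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)
err = float(np.abs(data - recon).mean())  # drops well below chance (0.5)
```

With sampling restored (binary draws from the probabilities), this becomes the standard CD-1 of the Hinton paper; persistent and tempered variants in the list change how the negative-phase statistics are obtained.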
Boltzmann Machines
Deep Boltzmann Machines (Salakhutdinov & Hinton)
A Two-stage Pretraining Algorithm for Deep Boltzmann Machines
Regularized Auto-Encoders
Regularization
Stochastic Nets & GSNs
Others
Slow, Decorrelated Features for Pretraining Complex Cell-like Networks
What Regularized Auto-Encoders Learn from the Data Generating Distribution
Recurrent Nets
Learning long-term dependencies with gradient descent is difficult
Learning recurrent neural networks with Hessian-free optimization
On the importance of initialization and momentum in deep learning
Long short-term memory (Hochreiter & Schmidhuber)
Long Short-Term Memory in Echo State Networks: Details of a Simulation Study
The "echo state" approach to analysing and training recurrent neural networks
Backpropagation-Decorrelation: online recurrent learning with O(N) complexity
New results on recurrent network training: Unifying the algorithms and accelerating convergence
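The Bengio et al. paper listed first explains why gradients propagated through many recurrent steps vanish or explode. The effect is already visible in a one-unit linear RNN, where the derivative of the final state with respect to the initial state is just the recurrent weight raised to the number of steps:

```python
# In a linear RNN h_t = w * h_{t-1}, the derivative dh_T / dh_0 is w**T:
# it vanishes exponentially for |w| < 1 and explodes for |w| > 1.
def gradient_through_time(w, T):
    grad = 1.0
    for _ in range(T):
        grad *= w  # chain rule: one factor of w per time step
    return grad

vanish = gradient_through_time(0.5, 50)   # ~8.9e-16
explode = gradient_through_time(1.5, 50)  # ~6.4e8
```

LSTM (listed above) counters this by routing the state through an additive path whose effective factor stays near 1.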
Convolutional Nets
ImageNet Classification with Deep Convolutional Neural Networks, Alex Krizhevsky, Ilya Sutskever, Geoffrey E Hinton, NIPS 2012.
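The core operation of the Krizhevsky et al. architecture is the 2-D convolution, which deep-learning libraries actually implement as cross-correlation. A minimal single-channel "valid" version, with the image and kernel invented purely for illustration:

```python
def conv2d_valid(image, kernel):
    """'Valid' 2-D cross-correlation: slide the kernel over the image
    and take dot products, with no padding, so the output shrinks by
    kernel_size - 1 in each dimension."""
    H, W = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(H - kh + 1):
        row = []
        for j in range(W - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 1x2 vertical-edge detector applied to an image with an edge
# down the middle: the response peaks exactly at the edge.
img = [[0, 0, 1, 1]] * 4
edge = conv2d_valid(img, [[-1, 1]])  # each row is [0, 1, 0]
```

A convnet learns many such kernels per layer and shares them across all spatial positions, which is what makes the parameter count manageable at ImageNet scale.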
Optimization issues with DL
Knowledge Matters: Importance of Prior Information for Optimization
Practical recommendations for gradient-based training of deep architectures
Hessian Free
Natural Gradient (TONGA)
NLP + DL
Distributed Representations of Words and Phrases and their Compositionality
Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection
CV + RBM
CV + DL
Scaling Up
DL + Reinforcement learning
Graphical Models Background
An Introduction to Graphical Models (Mike Jordan, brief course notes)
A View of the EM Algorithm that Justifies Incremental, Sparse and Other Variants (Neal & Hinton, important paper to the modern understanding of Expectation-Maximization)
A Unifying Review of Linear Gaussian Models (Roweis & Ghahramani, ties together PCA, factor analysis, hidden Markov models, Gaussian mixtures, k-means, linear dynamical systems)
An Introduction to Variational Methods for Graphical Models (Jordan et al, mean-field, etc.)
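The Neal & Hinton paper above reinterprets EM as coordinate ascent on a single objective. As a concrete baseline to read it against, plain EM for a two-component 1-D Gaussian mixture; the data, initialization, and iteration count are invented for illustration:

```python
import math

def em_gmm_1d(xs, mu, var, pi, n_iter):
    """Plain EM for a 2-component 1-D Gaussian mixture.
    mu, var, pi are length-2 lists of means, variances, weights."""
    for _ in range(n_iter):
        # E-step: responsibilities r[i][k] = P(component k | x_i).
        r = []
        for x in xs:
            dens = [pi[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    for k in range(2)]
            z = sum(dens)
            r.append([d / z for d in dens])
        # M-step: re-estimate each component from weighted statistics.
        for k in range(2):
            nk = sum(ri[k] for ri in r)
            mu[k] = sum(ri[k] * x for ri, x in zip(r, xs)) / nk
            var[k] = sum(ri[k] * (x - mu[k]) ** 2
                         for ri, x in zip(r, xs)) / nk + 1e-6
            pi[k] = nk / len(xs)
    return mu, var, pi

# Two well-separated clusters around 0 and 5.
xs = [-0.1, 0.0, 0.1, 4.9, 5.0, 5.1]
mu, var, pi = em_gmm_1d(xs, [1.0, 4.0], [1.0, 1.0], [0.5, 0.5], 25)
```

Neal & Hinton's point is that the E-step and M-step each increase one shared free-energy objective, which licenses the incremental and sparse variants their paper develops.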
Writing
Software documentation
Python, Theano, Pylearn2
Linux shell (bash): at least the first 5 sections
git: the first 5 sections
GitHub and contributing to it (see the Theano docs)
vim tutorial or emacs tutorial
Software lists of built-in commands/functions
Other software to know about:
screen
ssh
ipython
matplotlib
设计思路 本文整理归纳以往的工作中用到的东西,现汇总成基础测试框架提供分享. 框架采用python3 + selenium3 + PO + yaml + ddt + unittest等技术编写成基础测 ...