Theories of Deep Learning
https://stats385.github.io/readings
Lecture 1 – Deep Learning Challenge. Is There Theory?
Readings
- Deep Deep Trouble
- Why 2016 is The Global Tipping Point...
- Are AI and ML Killing Analyticals...
- The Dark Secret at The Heart of AI
- AI Robots Learning Racism...
- FaceApp Forced to Pull 'Racist' Filters...
- Losing a Whole Generation of Young Men to Video Games
Lecture 2 – Overview of Deep Learning From a Practical Point of View
Readings
- Emergence of simple-cell receptive field properties by learning a sparse code for natural images
- ImageNet Classification with Deep Convolutional Neural Networks (AlexNet)
- Very Deep Convolutional Networks for Large-Scale Image Recognition (VGG)
- Going Deeper with Convolutions (GoogLeNet)
- Deep Residual Learning for Image Recognition (ResNet)
- Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (a minimal sketch of the transform follows this list)
- Visualizing and Understanding Convolutional Networks
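
As a concrete companion to the batch-normalization paper above, here is a minimal NumPy sketch of the training-time transform: normalize each feature over the mini-batch, then apply a learned scale and shift. The names gamma, beta, and eps follow the paper's notation; the toy data and sizes are illustrative only.

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization for a (batch, features) array.

    Normalizes each feature to zero mean / unit variance over the
    mini-batch, then applies the learned affine parameters gamma, beta.
    """
    mu = x.mean(axis=0)                  # per-feature mini-batch mean
    var = x.var(axis=0)                  # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# toy usage: 32 examples, 4 features
x = np.random.randn(32, 4) * 3.0 + 5.0
y = batch_norm_train(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0), y.std(axis=0))     # ~0 and ~1 per feature
```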
Blogs
Videos
Lecture 3
Readings
- A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction
- Energy Propagation in Deep Convolutional Neural Networks
- Discrete Deep Feature Extraction: A Theory and New Architectures
- Topology Reduction in Deep Convolutional Feature Extraction Networks
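
These papers analyze feature extractors built from cascades of convolutions, modulus nonlinearities, and pooling. The sketch below runs that pipeline on a 1-D toy signal; the random filters are a hypothetical stand-in for the structured (e.g., wavelet-based) filter banks the theory actually covers.

```python
import numpy as np

def scatter_layer(signals, filters, pool=2):
    """One layer of a scattering-style feature extractor:
    convolve with each filter, take the modulus, average-pool."""
    out = []
    for s in signals:
        for f in filters:
            u = np.abs(np.convolve(s, f, mode="same"))    # filter + modulus
            out.append(u.reshape(-1, pool).mean(axis=1))  # average pooling
    return out

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 8 * np.pi, 256))                # toy input signal
filters = [rng.standard_normal(9) for _ in range(3)]      # stand-in filter bank
layer1 = scatter_layer([x], filters)                      # 3 maps, length 128
layer2 = scatter_layer(layer1, filters)                   # 9 maps, length 64
print(len(layer2), layer2[0].shape)
```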
Lecture 4
Readings
- A Probabilistic Framework for Deep Learning
- Semi-Supervised Learning with the Deep Rendering Mixture Model
- A Probabilistic Theory of Deep Learning
Lecture 5
Readings
- Why and When Can Deep-but Not Shallow-networks Avoid the Curse of Dimensionality: A Review
- Learning Functions: When is Deep Better Than Shallow
Lecture 6
Readings
- Convolutional Patch Representations for Image Retrieval: an Unsupervised Approach
- Convolutional Kernel Networks
- Kernel Descriptors for Visual Recognition
- End-to-End Kernel Learning with Supervised Convolutional Kernel Networks
- Learning with Kernels
- Kernel Based Methods for Hypothesis Testing
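
For readers new to the kernel literature above, a minimal kernel ridge regression with a Gaussian (RBF) kernel shows the basic machinery that convolutional kernel networks build on. The bandwidth and regularization values are arbitrary illustrations, not anything the papers prescribe.

```python
import numpy as np

def rbf_kernel(A, B, bandwidth=1.0):
    """Gaussian kernel matrix K[i, j] = exp(-||a_i - b_j||^2 / (2 s^2))."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))                   # training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)    # noisy targets

lam = 1e-2                                             # ridge regularization
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # dual coefficients

X_test = np.linspace(-3, 3, 5)[:, None]
y_pred = rbf_kernel(X_test, X) @ alpha                 # predict at test points
print(np.c_[X_test[:, 0], y_pred])
```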
Lecture 7
Readings
- Geometry of Neural Network Loss Surfaces via Random Matrix Theory
- Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice
- Nonlinear random matrix theory for deep learning
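
A quick experiment in the spirit of these papers: compare the singular values of a deep network's input-output Jacobian under Gaussian versus orthogonal weight initialization. The sketch below handles the linear case only (the papers treat nonlinear networks), which already shows orthogonal products keeping every singular value at 1, i.e., exact dynamical isometry, while Gaussian products spread out.

```python
import numpy as np

rng = np.random.default_rng(0)
n, depth = 200, 20

def jacobian_spectrum(init):
    """Singular values of a product of `depth` n-by-n weight matrices,
    i.e. the input-output Jacobian of a deep *linear* network."""
    J = np.eye(n)
    for _ in range(depth):
        if init == "gaussian":
            W = rng.standard_normal((n, n)) / np.sqrt(n)  # variance-1/n scaling
        else:                                             # random orthogonal
            W, _ = np.linalg.qr(rng.standard_normal((n, n)))
        J = W @ J
    return np.linalg.svd(J, compute_uv=False)

for init in ("gaussian", "orthogonal"):
    s = jacobian_spectrum(init)
    print(init, "min/max singular value:", s.min(), s.max())
```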
Lecture 8
Readings
- Deep Learning without Poor Local Minima
- Topology and Geometry of Half-Rectified Network Optimization
- Convexified Convolutional Neural Networks
- Implicit Regularization in Matrix Factorization
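
The last paper in this list observes that plain gradient descent on an unregularized factorization tends toward low-nuclear-norm solutions when fitting a few observed entries of a matrix, provided the initialization is small. A minimal sketch of that experiment follows; the toy rank-2 target, sampling rate, and step size are assumptions of this sketch, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 20, 2
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n)) / np.sqrt(n)
mask = rng.random((n, n)) < 0.4                  # ~40% observed entries

# full-size factors, small initialization, no explicit regularizer anywhere
U = 0.01 * rng.standard_normal((n, n))
V = 0.01 * rng.standard_normal((n, n))
lr = 0.05
for _ in range(20000):
    R = mask * (U @ V.T - X_true)                # residual on observed entries only
    U, V = U - lr * (R @ V), V - lr * (R.T @ U)  # gradient step on both factors

X_hat = U @ V.T
svd = lambda M: np.linalg.svd(M, compute_uv=False)
print("max observed-entry error:", np.abs(mask * (X_hat - X_true)).max())
print("nuclear norm of solution:", svd(X_hat).sum())
print("nuclear norm of target  :", svd(X_true).sum())
```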
Lecture 9
Readings
- Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position
- Perception as an inference problem
- A Neurobiological Model of Visual Attention and Invariant Pattern Recognition Based on Dynamic Routing of Information
Lecture 10
Readings
- Working Locally Thinking Globally: Theoretical Guarantees for Convolutional Sparse Coding
- Convolutional Neural Networks Analyzed via Convolutional Sparse Coding
- Multi-Layer Convolutional Sparse Modeling: Pursuit and Dictionary Learning
- Convolutional Dictionary Learning via Local Processing
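
The pursuit algorithms these papers build on reduce, in the simplest case, to ISTA for the l1-regularized problem min_z 0.5*||x - Dz||^2 + lam*||z||_1. Here is a sketch with a random (non-convolutional) dictionary standing in for the convolutional ones the papers analyze; the sparsity level and lam are illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(x, D, lam=0.1, iters=500):
    """ISTA for min_z 0.5*||x - D z||^2 + lam*||z||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(iters):
        z = soft_threshold(z + D.T @ (x - D @ z) / L, lam / L)
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((30, 100))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
z_true = np.zeros(100)
z_true[[3, 42, 77]] = [1.5, -2.0, 1.0]     # 3-sparse ground-truth code
x = D @ z_true + 0.01 * rng.standard_normal(30)

z = ista(x, D)
print("recovered support:", np.nonzero(np.abs(z) > 0.1)[0])
```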
To be discussed and extra
- Emergence of simple-cell receptive field properties by learning a sparse code for natural images by Olshausen and Field
- Auto-Encoding Variational Bayes by Kingma and Welling
- Generative Adversarial Networks by Goodfellow et al.
- Understanding Deep Learning Requires Rethinking Generalization by Zhang et al.
- Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy? by Giryes et al.
- Robust Large Margin Deep Neural Networks by Sokolic et al.
- Tradeoffs between Convergence Speed and Reconstruction Accuracy in Inverse Problems by Giryes et al.
- Understanding Trainable Sparse Coding via Matrix Factorization by Moreau and Bruna
- Why are Deep Nets Reversible: A Simple Theory, With Implications for Training by Arora et al.
- Stable Recovery of the Factors From a Deep Matrix Product and Application to Convolutional Network by Malgouyres and Landsberg
- Optimal Approximation with Sparse Deep Neural Networks by Bölcskei et al.
- Convolutional Rectifier Networks as Generalized Tensor Decompositions by Cohen and Shashua
- Emergence of Invariance and Disentanglement in Deep Representations by Achille and Soatto
- Deep Learning and the Information Bottleneck Principle by Tishby and Zaslavsky