Training Deep Neural Networks
Reposted from: http://handong1587.github.io/deep_learning/2015/10/09/training-dnn.html
Tutorials
Popular Training Approaches of DNNs — A Quick Overview
Activation functions
Rectified linear units improve restricted boltzmann machines (ReLU)
Rectifier Nonlinearities Improve Neural Network Acoustic Models (leaky-ReLU, aka LReLU)
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (PReLU)
- keywords: PReLU, Caffe “msra” weights initialization
- arXiv: http://arxiv.org/abs/1502.01852
Empirical Evaluation of Rectified Activations in Convolutional Network (ReLU/LReLU/PReLU/RReLU)
Deep Learning with S-shaped Rectified Linear Activation Units (SReLU)
Parametric Activation Pools greatly increase performance and consistency in ConvNets
Noisy Activation Functions
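The rectifier variants above differ mainly in how they treat negative inputs. A minimal numpy sketch of each (function names are mine, not from the papers):

```python
import numpy as np

def relu(x):
    # ReLU: zero out negative inputs
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # LReLU: small fixed slope alpha on the negative side
    return np.where(x > 0, x, alpha * x)

def prelu(x, alpha):
    # PReLU: same form as LReLU, but alpha is learned per channel
    return np.where(x > 0, x, alpha * x)

def rrelu(x, lower=1/8, upper=1/3, training=True):
    # RReLU: negative slope sampled uniformly at train time,
    # fixed to its mean at test time
    if training:
        alpha = np.random.uniform(lower, upper, size=x.shape)
    else:
        alpha = (lower + upper) / 2
    return np.where(x > 0, x, alpha * x)
```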
Weights Initialization
An Explanation of Xavier Initialization
Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?
All you need is a good init
Data-dependent Initializations of Convolutional Neural Networks
What are good initial weights in a neural network?
- stackexchange: http://stats.stackexchange.com/questions/47590/what-are-good-initial-weights-in-a-neural-network
RandomOut: Using a convolutional gradient norm to win The Filter Lottery
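For quick reference, the two initialization rules most often cited in this list, as a hedged numpy sketch (the “msra” rule comes from the PReLU paper under Activation functions; shapes and names here are illustrative):

```python
import numpy as np

def xavier_init(fan_in, fan_out):
    # Glorot/Xavier: keep activation variance roughly constant
    # across layers (derived for linear/tanh units)
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return np.random.randn(fan_out, fan_in) * std

def msra_init(fan_in, fan_out):
    # He/"msra": accounts for ReLU zeroing half the activations
    std = np.sqrt(2.0 / fan_in)
    return np.random.randn(fan_out, fan_in) * std
```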
Batch Normalization
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (ImageNet top-5 error: 4.82%)
- arXiv: http://arxiv.org/abs/1502.03167
- blog: https://standardfrancis.wordpress.com/2015/04/16/batch-normalization/
- notes: http://blog.csdn.net/happynear/article/details/44238541
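The training-mode forward pass at the heart of the paper, as a numpy sketch for a batch of shape (N, D); running statistics and the backward pass are omitted:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the mini-batch, then scale and shift
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance per feature
    return gamma * x_hat + beta            # learned scale gamma and shift beta
```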
Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks
- arxiv: http://arxiv.org/abs/1602.07868
- github(Lasagne): https://github.com/TimSalimans/weight_norm
- notes: http://www.erogol.com/my-notes-weight-normalization/
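The reparameterization itself is one line; a sketch for a single weight vector:

```python
import numpy as np

def weight_norm(v, g):
    # w = g * v / ||v||: scalar g carries the magnitude,
    # vector v contributes only its direction
    return g * v / np.linalg.norm(v)
```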
Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks
Loss Function
The Loss Surfaces of Multilayer Networks
Optimization Methods
On Optimization Methods for Deep Learning
On the importance of initialization and momentum in deep learning
Invariant backpropagation: how to train a transformation-invariant neural network
A practical theory for designing very deep convolutional neural network
- kaggle: https://www.kaggle.com/c/datasciencebowl/forums/t/13166/happy-lantern-festival-report-and-code/69284
- paper: https://kaggle2.blob.core.windows.net/forum-message-attachments/69182/2287/A%20practical%20theory%20for%20designing%20very%20deep%20convolutional%20neural%20networks.pdf?sv=2012-02-12&se=2015-12-05T15%3A40%3A02Z&sr=b&sp=r&sig=kfBQKduA1pDtu837Y9Iqyrp2VYItTV0HCgOeOok9E3E%3D
- slides: http://vdisk.weibo.com/s/3nFsznjLKn
Stochastic Optimization Techniques
- intro: SGD/Momentum/NAG/Adagrad/RMSProp/Adadelta/Adam/ESGD/Adasecant/vSGD/Rprop
- blog: http://colinraffel.com/wiki/stochastic_optimization_techniques
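The update rules surveyed there share one skeleton. Minimal numpy sketches of two of them, classical momentum and Adam, with hyperparameters set to common defaults rather than values from any one paper:

```python
import numpy as np

def momentum_step(w, grad, v, lr=0.01, mu=0.9):
    # Classical momentum: accumulate a velocity, then step along it
    v = mu * v - lr * grad
    return w + v, v

def adam_step(w, grad, m, u, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: bias-corrected first and second moment estimates
    # of the gradient (t is the 1-indexed step count)
    m = b1 * m + (1 - b1) * grad
    u = b2 * u + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)
    u_hat = u / (1 - b2**t)
    return w - lr * m_hat / (np.sqrt(u_hat) + eps), m, u
```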
Alec Radford’s animations for optimization algorithms
- blog: http://www.denizyuret.com/2015/03/alec-radfords-animations-for.html
Faster Asynchronous SGD (FASGD)
An overview of gradient descent optimization algorithms (★★★★★)

Exploiting the Structure: Stochastic Gradient Methods Using Raw Clusters
Writing fast asynchronous SGD/AdaGrad with RcppParallel
Regularization
DisturbLabel: Regularizing CNN on the Loss Layer [University of California & MSR] (2016)
- intro: “an extremely simple algorithm which randomly replaces a part of labels as incorrect values in each iteration”
- paper: http://research.microsoft.com/en-us/um/people/jingdw/pubs/cvpr16-disturblabel.pdf
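Per the intro line above, the mechanism is plain label noise applied each iteration. A hedged numpy sketch (the rate and function name are mine, not from the paper):

```python
import numpy as np

def disturb_labels(labels, num_classes, rate=0.1):
    # Randomly replace a fraction of labels with uniform-random classes
    noisy = labels.copy()
    mask = np.random.rand(len(labels)) < rate
    noisy[mask] = np.random.randint(0, num_classes, size=mask.sum())
    return noisy
```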
Dropout
Improving neural networks by preventing co-adaptation of feature detectors (Dropout)
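Dropout itself is a two-line mechanism. A minimal sketch of the inverted-dropout variant that most frameworks implement (the paper instead scales weights at test time); the rate is illustrative:

```python
import numpy as np

def dropout_forward(x, p=0.5, training=True):
    # Inverted dropout: zero units with probability p at train time,
    # scaling survivors by 1/(1-p) so test time needs no rescaling
    if not training:
        return x
    mask = (np.random.rand(*x.shape) >= p) / (1.0 - p)
    return x * mask
```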
Regularization of Neural Networks using DropConnect
- homepage: http://cs.nyu.edu/~wanli/dropc/
- gitxiv: http://gitxiv.com/posts/rJucpiQiDhQ7HkZoX/regularization-of-neural-networks-using-dropconnect
- github: https://github.com/iassael/torch-dropconnect
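For contrast with dropout, a minimal training-time sketch of DropConnect, which masks individual weights rather than unit activations (the paper's Gaussian approximation for inference is omitted):

```python
import numpy as np

def dropconnect_forward(x, W, b, p=0.5):
    # Mask each weight independently with keep probability 1-p;
    # x is (N, in), W is (out, in), b is (out,)
    mask = np.random.rand(*W.shape) >= p
    return x @ (W * mask).T + b
```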
Regularizing neural networks with dropout and with DropConnect
Fast dropout training
- paper: http://jmlr.org/proceedings/papers/v28/wang13a.pdf
- github: https://github.com/sidaw/fastdropout
Dropout as data augmentation
- paper: http://arxiv.org/abs/1506.08700
- notes: https://www.evernote.com/shard/s189/sh/ef0c3302-21a4-40d7-b8b4-1c65b8ebb1c9/24ff553fcfb70a27d61ff003df75b5a9
A Theoretically Grounded Application of Dropout in Recurrent Neural Networks
Improved Dropout for Shallow and Deep Learning
Gradient Descent
Fitting a model via closed-form equations vs. Gradient Descent vs. Stochastic Gradient Descent vs. Mini-Batch Learning. What is the difference? (Normal Equations vs. GD vs. SGD vs. MB-GD)
- blog: http://sebastianraschka.com/faq/docs/closed-form-vs-gd.html
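The distinction that FAQ entry draws is just how much data each update touches. A self-contained least-squares sketch on synthetic data:

```python
import numpy as np

X = np.random.randn(1000, 5)
y = X @ np.array([1., -2., 0.5, 3., -1.]) + 0.1 * np.random.randn(1000)

# Closed form (normal equations): one shot, cubic in the feature count
w_closed = np.linalg.solve(X.T @ X, X.T @ y)

# Mini-batch gradient descent: repeated cheap updates on random subsets
w = np.zeros(5)
lr, batch = 0.01, 32
for step in range(2000):
    idx = np.random.choice(len(X), batch, replace=False)
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch
    w -= lr * grad  # batch=len(X) gives full GD; batch=1 gives pure SGD
```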
An Introduction to Gradient Descent in Python
Train faster, generalize better: Stability of stochastic gradient descent
A Variational Analysis of Stochastic Gradient Algorithms
The vanishing gradient problem: Oh no — an obstacle to deep learning!
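A toy numpy illustration of the effect that post describes (ignoring the weight factors in real backprop): stacked sigmoids multiply derivatives bounded by 0.25, so the gradient signal decays geometrically with depth:

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

z, grad = 0.5, 1.0
for layer in range(20):
    s = sigmoid(z)
    grad *= s * (1 - s)  # sigmoid' <= 0.25, so grad shrinks every layer
print(grad)              # roughly 1e-13 after 20 layers
```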
Gradient Descent For Machine Learning
- blog: http://machinelearningmastery.com/gradient-descent-for-machine-learning/
Revisiting Distributed Synchronous SGD
Accelerate Training
Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices
Image Data Augmentation
DataAugmentation ver1.0: an image data augmentation tool for training image recognition algorithms
Caffe-Data-Augmentation: a branch of Caffe with data augmentation via a configurable stochastic combination of 7 augmentation techniques
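Both tools compose simple stochastic transforms. A minimal numpy sketch of two of the usual ones, horizontal flip and random crop, assuming an HWC image array at least crop pixels on each side:

```python
import numpy as np

def augment(img, crop=24):
    # Random horizontal flip
    if np.random.rand() < 0.5:
        img = img[:, ::-1, :]
    # Random crop to a crop x crop patch
    h, w, _ = img.shape
    top = np.random.randint(0, h - crop + 1)
    left = np.random.randint(0, w - crop + 1)
    return img[top:top + crop, left:left + crop, :]
```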
Papers
Scalable and Sustainable Deep Learning via Randomized Hashing
Tools
pastalog: Simple, realtime visualization of neural network training performance

torch-pastalog: A Torch interface for pastalog - simple, realtime visualization of neural network training performance