Reposted from http://handong1587.github.io/deep_learning/2015/10/09/training-dnn.html

Training Deep Neural Networks

Published: 09 Oct 2015 · Category: deep_learning

Tutorials

Popular Training Approaches of DNNs — A Quick Overview

https://medium.com/@asjad/popular-training-approaches-of-dnns-a-quick-overview-26ee37ad7e96#.pqyo039bb

Activation functions

Rectified linear units improve restricted Boltzmann machines (ReLU)

Rectifier Nonlinearities Improve Neural Network Acoustic Models (leaky-ReLU, aka LReLU)

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (PReLU)

Empirical Evaluation of Rectified Activations in Convolutional Network (ReLU/LReLU/PReLU/RReLU)

Deep Learning with S-shaped Rectified Linear Activation Units (SReLU)

Parametric Activation Pools greatly increase performance and consistency in ConvNets

Noisy Activation Functions
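
The rectifier variants above differ mainly in how they treat negative inputs. A minimal NumPy sketch of the common ones (the slope bounds for RReLU follow the empirical-evaluation paper; all function names are illustrative):

```python
import numpy as np

def relu(x):
    # ReLU: zero out negative inputs (Nair & Hinton)
    return np.maximum(0.0, x)

def leaky_relu(x, a=0.01):
    # Leaky ReLU: small fixed slope a on the negative side (Maas et al.)
    return np.where(x > 0, x, a * x)

def prelu(x, a):
    # PReLU: the negative slope a is a learned parameter (He et al.)
    return np.where(x > 0, x, a * x)

def rrelu(x, lower=1/8, upper=1/3, training=True):
    # RReLU: negative slope sampled uniformly per element during training,
    # fixed to the midpoint of the range at test time (Xu et al.)
    if training:
        a = np.random.uniform(lower, upper, size=x.shape)
    else:
        a = (lower + upper) / 2.0
    return np.where(x > 0, x, a * x)
```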

Weights Initialization

An Explanation of Xavier Initialization

Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy?

All you need is a good init

Data-dependent Initializations of Convolutional Neural Networks

What are good initial weights in a neural network?

RandomOut: Using a convolutional gradient norm to win The Filter Lottery
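
As a reference point for the initialization papers above, here is a minimal sketch of the Xavier/Glorot scheme and the ReLU-adjusted He variant introduced in the PReLU paper (function names are illustrative):

```python
import numpy as np

def xavier_init(fan_in, fan_out):
    # Glorot & Bengio: Var(W) = 2 / (fan_in + fan_out), so activation
    # variance is roughly preserved in both directions
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return np.random.randn(fan_in, fan_out) * std

def he_init(fan_in, fan_out):
    # He et al.: Var(W) = 2 / fan_in, compensating for ReLU
    # zeroing out half of the activations on average
    return np.random.randn(fan_in, fan_out) * np.sqrt(2.0 / fan_in)
```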

Batch Normalization

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (ImageNet top-5 error: 4.82%)

Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks

Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks
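
Batch normalization standardizes each feature over the mini-batch and then restores representational power with learned scale and shift parameters. A training-time forward-pass sketch (running statistics for inference are omitted):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    # x: (batch, features). Normalize each feature over the batch,
    # then apply the learned scale (gamma) and shift (beta).
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```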

Loss Function

The Loss Surfaces of Multilayer Networks

Optimization Methods

On Optimization Methods for Deep Learning

On the importance of initialization and momentum in deep learning

Invariant backpropagation: how to train a transformation-invariant neural network

A practical theory for designing very deep convolutional neural network

Stochastic Optimization Techniques

Alec Radford’s animations for optimization algorithms

http://www.denizyuret.com/2015/03/alec-radfords-animations-for.html

Faster Asynchronous SGD (FASGD)

An overview of gradient descent optimization algorithms (★★★★★)

Exploiting the Structure: Stochastic Gradient Methods Using Raw Clusters

Writing fast asynchronous SGD/AdaGrad with RcppParallel
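
Most of the stochastic methods surveyed above are small variations on the same update loop. A sketch of two of them, classical momentum (see the Sutskever et al. paper under Optimization Methods) and AdaGrad (mentioned in the RcppParallel post); the signatures are illustrative:

```python
import numpy as np

def sgd_momentum(w, grad, v, lr=0.01, mu=0.9):
    # Classical momentum: accumulate a velocity vector that smooths
    # updates along consistent gradient directions
    v = mu * v - lr * grad
    return w + v, v

def adagrad(w, grad, cache, lr=0.01, eps=1e-8):
    # AdaGrad: per-parameter step sizes shrink with the accumulated
    # squared gradients, so rarely-updated weights take larger steps
    cache = cache + grad ** 2
    return w - lr * grad / (np.sqrt(cache) + eps), cache
```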

Regularization

DisturbLabel: Regularizing CNN on the Loss Layer [University of California & MSR] (2016)
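
DisturbLabel regularizes on the loss layer by occasionally replacing a ground-truth label with one drawn uniformly at random. A sketch of the core idea (alpha and the helper name are illustrative):

```python
import numpy as np

def disturb_label(labels, num_classes, alpha=0.1):
    # With probability alpha, resample a label uniformly over all
    # classes (which may by chance return the true label)
    labels = labels.copy()
    mask = np.random.rand(len(labels)) < alpha
    labels[mask] = np.random.randint(0, num_classes, size=mask.sum())
    return labels
```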

Dropout

Improving neural networks by preventing co-adaptation of feature detectors (Dropout)

Regularization of Neural Networks using DropConnect

Regularizing neural networks with dropout and with DropConnect

Fast dropout training

Dropout as data augmentation

A Theoretically Grounded Application of Dropout in Recurrent Neural Networks

Improved Dropout for Shallow and Deep Learning
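
For reference, a sketch of the "inverted" dropout variant commonly used in practice: units are zeroed with probability p during training and survivors are rescaled by 1/(1-p), so the test-time forward pass is unchanged:

```python
import numpy as np

def dropout_forward(x, p=0.5, training=True):
    # Inverted dropout: drop units with probability p, rescale the
    # survivors so the expected activation matches test time
    if not training:
        return x
    mask = (np.random.rand(*x.shape) >= p) / (1.0 - p)
    return x * mask
```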

Gradient Descent

Fitting a model via closed-form equations vs. Gradient Descent vs. Stochastic Gradient Descent vs. Mini-Batch Learning: what is the difference? (Normal Equations vs. GD vs. SGD vs. MB-GD)

http://sebastianraschka.com/faq/docs/closed-form-vs-gd.html

An Introduction to Gradient Descent in Python

Train faster, generalize better: Stability of stochastic gradient descent

A Variational Analysis of Stochastic Gradient Algorithms

The vanishing gradient problem: Oh no — an obstacle to deep learning!

Gradient Descent For Machine Learning

http://machinelearningmastery.com/gradient-descent-for-machine-learning/

Revisiting Distributed Synchronous SGD
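
The Raschka FAQ entry above boils down to how much data contributes to each gradient estimate. A linear-regression sketch where batch_size=1 recovers SGD and batch_size=len(X) recovers full-batch GD (hyperparameters are illustrative):

```python
import numpy as np

def minibatch_gd(X, y, lr=0.01, batch_size=32, epochs=100):
    # Mini-batch gradient descent on mean-squared error for linear
    # regression. batch_size=1 gives SGD; batch_size=n gives batch GD.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = np.random.permutation(n)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            Xb, yb = X[batch], y[batch]
            grad = 2.0 * Xb.T @ (Xb @ w - yb) / len(batch)
            w -= lr * grad
    return w
```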

Accelerate Training

Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices

Image Data Augmentation

DataAugmentation ver1.0: an image data augmentation tool for training image recognition algorithms

Caffe-Data-Augmentation: a branch of Caffe with support for data augmentation via a configurable stochastic combination of 7 data augmentation techniques
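
The tools above compose simple label-preserving transforms. A NumPy sketch of two of the most common, random horizontal flip and random crop (not a reproduction of either tool's actual technique set):

```python
import numpy as np

def random_flip(img):
    # img: (H, W, C). Flip left-right with probability 0.5.
    return img[:, ::-1] if np.random.rand() < 0.5 else img

def random_crop(img, size):
    # Crop a random (size x size) window from the image.
    h, w = img.shape[:2]
    top = np.random.randint(0, h - size + 1)
    left = np.random.randint(0, w - size + 1)
    return img[top:top + size, left:left + size]
```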

Papers

Scalable and Sustainable Deep Learning via Randomized Hashing

Tools

pastalog: Simple, realtime visualization of neural network training performance

torch-pastalog: A Torch interface for pastalog - simple, realtime visualization of neural network training performance
