Geoff Hinton's Advice on Deep Neural Networks
Note: This covers suggestions from Geoff Hinton’s talk given at UBC which was recorded May 30, 2013. It does not cover bleeding edge techniques.
The advice breaks down into the following points:
- Have a Deep Network.
A network with one or two hidden layers counts as a shallow network. As the number of hidden layers grows, training gets harder: you run into problems such as local optima and a shortage of data.
The key difference between a deep and a shallow neural network is the deep network's greater representational power, which increases as layers are added.
PS: In theory, a network with a single hidden layer but many units (large breadth rather than depth) has representational power similar to a deeper network's, but no method is currently known for training such a network well.
- Pretrain if you do not have a lot of labelled training data; if you do, skip it.
Pretraining is also called greedy layer-wise training. If you do not have enough labelled samples, run greedy layer-wise pretraining; if you have plenty, just train the full network stack directly.
Pretraining puts the parameters at a good starting point; once you have enough labelled samples, this becomes unnecessary.
Side Note: An interesting paper shows that unsupervised pretraining encourages sparseness in DNNs. Link is here.
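As a rough illustration of the greedy layer-wise idea, here is a minimal numpy sketch that pretrains a stack of small autoencoders, each trained on the codes of the layer below; the layer sizes, learning rate, and untied decoder weights are illustrative choices, not details from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_layer(X, n_hidden, lr=0.1, epochs=100):
    """Fit one autoencoder layer to X; return its encoder and the codes."""
    n, n_vis = X.shape
    W1 = rng.normal(0.0, 0.01, (n_vis, n_hidden))  # small random init
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.01, (n_hidden, n_vis))
    b2 = np.zeros(n_vis)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)          # encode
        R = H @ W2 + b2                   # linear decode
        dR = (R - X) / n                  # gradient of 0.5 * mean squared error
        dH = (dR @ W2.T) * H * (1.0 - H)  # backprop through the sigmoid
        W2 -= lr * H.T @ dR
        b2 -= lr * dR.sum(axis=0)
        W1 -= lr * X.T @ dH
        b1 -= lr * dH.sum(axis=0)
    return W1, b1, sigmoid(X @ W1 + b1)

# Greedy layer-wise: each layer is trained on the codes of the previous one,
# and the learned weights then initialize the full network stack.
X = rng.random((200, 64))                 # toy unlabelled data
codes, stack = X, []
for n_hidden in (32, 16):
    W, b, codes = pretrain_layer(codes, n_hidden)
    stack.append((W, b))
```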
- Initialize the weights to sensible values.
Set the weights to small random numbers; the distribution of these small random values depends on the nonlinearity used in the network. With rectified linear units, small positive values work well.
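As a sketch, something like the following. The scale factors are common later heuristics (He-style scaling for ReLU) rather than values from the talk, whose advice is only "small random numbers whose distribution depends on the nonlinearity":

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out, nonlinearity="relu"):
    """Small random weights, scaled by the unit type (heuristic choices)."""
    if nonlinearity == "relu":
        W = rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_out))  # He-style scale
        b = np.full(n_out, 0.1)  # small positive bias keeps ReLUs initially active
    else:  # sigmoid / tanh
        W = rng.normal(0.0, np.sqrt(1.0 / n_in), (n_in, n_out))
        b = np.zeros(n_out)
    return W, b
```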
- Use rectified linear units.
See my post 《修正线性单元(Rectified linear unit,ReLU)》.
It makes calculating the gradient during backpropagation trivial: the gradient is 0 if x < 0 and 1 elsewhere. This speeds up the training of the network.
ReLU units are more biologically plausible than other activation functions, since they model a biological neuron's response in its normal operating range, whereas sigmoid and tanh activations are biologically implausible: a sigmoid has a steady state of around 1/2, so after initializing with small weights, units fire at half their saturation potential.
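A minimal numpy sketch of the activation and the gradient rule just described (the value at exactly x = 0 is a matter of convention):

```python
import numpy as np

def relu(x):
    """Forward pass: max(0, x) elementwise."""
    return np.maximum(0.0, x)

def relu_grad(x):
    """Backprop gradient: 0 where x < 0, 1 elsewhere."""
    return (x >= 0).astype(x.dtype)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))       # [0.  0.  0.  0.5 2. ]
print(relu_grad(x))  # [0. 0. 1. 1. 1.]
```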
- Have many more parameters than training examples.
Make sure the total number of parameters (a single weight in your network counts as one parameter) exceeds the number of training examples by a wide margin. Always make the neural network overfit, then regularize it strongly; for example, with 1,000 training examples you might use a million parameters.
The rationale is to mimic the brain: the number of synapses far exceeds the number of experiences, and during any one activity most of them are simply not active.
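To make the ratio concrete, here is a quick count for a hypothetical fully connected net; the layer sizes below are made up for illustration:

```python
def count_params(layer_sizes):
    """Weights plus biases of a fully connected network."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# A hypothetical 100-1000-900-10 net already has about a million parameters,
# dwarfing a training set of 1,000 examples.
print(count_params([100, 1000, 900, 10]))  # 1010910
```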
- Use dropout to regularize it instead of L1 and L2 regularization.
Dropout is a technique that drops (leaves out) some hidden units in a hidden layer each time a training example is fed into the network, randomly subsampling the hidden layer. Each subsample is in effect a different architecture, and all of these architectures share weights.
This is a form of model averaging, or an approximation to it, and a strong regularization method. Unlike the usual L1 or L2 regularization, which pulls parameters toward zero, subsampling with shared weights pulls them toward sensible values. Quite neat.
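A minimal sketch of the now-standard "inverted" dropout variant, which rescales the surviving activations during training so nothing needs adjusting at test time (Hinton's original formulation instead halves the outgoing weights at test time):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(H, p_drop=0.5, train=True):
    """Randomly zero hidden units during training; identity at test time."""
    if not train:
        return H
    mask = rng.random(H.shape) >= p_drop  # keep each unit with prob 1 - p_drop
    return H * mask / (1.0 - p_drop)      # rescale so E[output] is unchanged

H = rng.random((4, 6))   # activations of one hidden layer for a minibatch
print(dropout(H))        # roughly half the units zeroed, the rest scaled up
```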
- Convolutional Frontend (optional)
If the data contains any spatial structure (e.g. voice, images, video), use a convolutional frontend.
See my post 《卷积神经网络(CNN)》.
A convolution can be seen as a filter or operator: it can extract features such as edges from raw pixels, or measure similarity to the convolution kernel. Using convolutions encodes the spatial structure of the input.
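A minimal sketch of the filtering view: a "valid" 2D cross-correlation (convolution without kernel flipping) in plain numpy, with a Sobel-style kernel as an illustrative edge detector:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D cross-correlation: dot the kernel with every image window."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.tile([0.0, 0.0, 1.0, 1.0], (4, 1))  # dark left half, bright right half
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])
print(conv2d(image, sobel_x))  # strong responses wherever a window spans the edge
```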
References:
http://343hz.com/general-guidelines-for-deep-neural-networks/
2015-9-11 艺少