Overview

In the previous sections, you constructed a 3-layer neural network comprising an input, a hidden, and an output layer. While fairly effective for MNIST, this 3-layer model is a shallow network; by this, we mean that the features (the hidden layer activations a(2)) are computed using only "one layer" of computation (the hidden layer).

In this section, we begin to discuss deep neural networks, meaning ones in which we have multiple hidden layers; this will allow us to compute much more complex features of the input. Because each hidden layer computes a non-linear transformation of the previous layer, a deep network can have significantly greater representational power (i.e., can learn significantly more complex functions) than a shallow one.
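To make the stacking concrete, here is a minimal NumPy sketch of forward propagation through an arbitrary number of hidden layers; the function names and the choice of a sigmoid non-linearity are illustrative, not taken from the tutorial's own code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Forward propagation through a stack of hidden layers.

    Each layer computes a non-linear transformation of the previous
    layer's activations: a(l+1) = f(W(l) a(l) + b(l)).
    """
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return a
```

With a single (W, b) pair this reduces to the 3-layer network from the previous sections; adding more pairs deepens the network without changing the code.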

Note that when training a deep network, it is important to use a non-linear activation function in each hidden layer. This is because multiple layers of linear functions would together compute only a linear function of the input (composing multiple linear functions results in just another linear function), and thus be no more expressive than using just a single layer of hidden units.
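A quick numerical check of this point, using a hypothetical two-layer network with the non-linearities removed (the shapes and weights here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W1 = rng.normal(size=(5, 4))  # "hidden" layer weights, no activation function
W2 = rng.normal(size=(3, 5))  # "output" layer weights

# Two stacked linear layers...
deep_linear = W2 @ (W1 @ x)
# ...are exactly equivalent to one linear layer with weights W2 @ W1.
single_linear = (W2 @ W1) @ x

print(np.allclose(deep_linear, single_linear))  # True
```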

Advantages of deep networks

The primary advantage is that a deep network can compactly represent a significantly larger set of functions than a shallow one. Formally, one can show that there are functions which a k-layer network can represent compactly (with a number of hidden units that is polynomial in the number of inputs), but which a (k − 1)-layer network cannot represent unless it has an exponentially large number of hidden units.

By using a deep network, in the case of images, one can also start to learn part-whole decompositions. For example, the first layer might learn to group together pixels in an image in order to detect edges (as seen in the earlier exercises). The second layer might then group together edges to detect longer contours, or perhaps detect simple "parts of objects." An even deeper layer might then group together these contours or detect even more complex features.

Finally, cortical computations in the brain also involve multiple layers of processing. For example, visual images are processed in multiple stages: first by cortical area "V1", then by cortical area "V2" (a different part of the brain), and so on.

Difficulty of training deep architectures

The main learning algorithm that researchers were using was to randomly initialize the weights of a deep network, and then train it on a labeled training set with a supervised learning objective, for example by applying gradient descent to drive down the training error. However, this usually did not work well. There were several reasons for this.

Availability of data
Supervised learning relies on labeled data, which was scarce and expensive to obtain; a deep network with many parameters can easily overfit a small labeled training set.

Local optima
Training a deep network involves a highly non-convex optimization problem, and gradient descent starting from randomly initialized weights frequently got stuck in poor local optima.

Diffusion of gradients

When using backpropagation to compute the derivatives, the gradients that are propagated backwards (from the output layer to the earlier layers of the network) rapidly diminish in magnitude as the depth of the network increases. As a result, the derivative of the overall cost with respect to the weights in the earlier layers is very small. Thus, when using gradient descent, the weights of the earlier layers change slowly, and the earlier layers fail to learn much. This problem is often called the "diffusion of gradients."
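The effect is easy to reproduce. The sketch below (illustrative: the depth, width, and small-random-weight initialization are arbitrary choices) backpropagates an error signal through a stack of sigmoid layers and prints its magnitude at each layer. Since each backward step multiplies by the activation derivative f'(z) = a(1 − a) ≤ 0.25, the signal shrinks as it travels toward the input:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

depth, width = 8, 50
Ws = [rng.normal(0, 0.1, (width, width)) for _ in range(depth)]

# Forward pass, keeping each layer's activations for the backward pass.
a, acts = rng.normal(size=width), []
for W in Ws:
    a = sigmoid(W @ a)
    acts.append(a)

# Backward pass: start with a unit-norm error signal at the output and
# propagate it toward the input, applying the chain rule at each layer.
delta = np.ones(width) / np.sqrt(width)    # stand-in for dJ/da at the top
delta = delta * acts[-1] * (1 - acts[-1])  # through the top non-linearity
print(f"layer {depth}: |delta| = {np.linalg.norm(delta):.2e}")
for i in reversed(range(depth - 1)):
    delta = (Ws[i + 1].T @ delta) * acts[i] * (1 - acts[i])
    print(f"layer {i + 1}: |delta| = {np.linalg.norm(delta):.2e}")
```

With these settings the printed norms shrink by a large constant factor at every layer, which is exactly why the earliest layers barely move under gradient descent.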

A closely related problem to the diffusion of gradients is that if the last few layers in a neural network have a large enough number of neurons, it may be possible for them to model the labeled data alone without the help of the earlier layers. Hence, training the entire network at once with all the layers randomly initialized ends up giving similar performance to training a shallow network (the last few layers) on corrupted input (the result of the processing done by the earlier layers).

Greedy layer-wise training

The main idea is to train the layers of the network one at a time: we first train a network with 1 hidden layer, and only after that is done do we train a network with 2 hidden layers, and so on. At each step, we take the old network with k − 1 hidden layers and add an additional k-th hidden layer (that takes as input the hidden layer k − 1 that we had just trained). Training can be supervised (say, with classification error as the objective function at each step), but more frequently it is unsupervised (as in an autoencoder; details to be provided later). The weights from training the layers individually are then used to initialize the weights in the final/overall deep network, and only then is the entire architecture "fine-tuned" (i.e., trained together to optimize the labeled training set error).
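As a concrete (and heavily simplified) illustration of the unsupervised variant, the sketch below pretrains one sigmoid autoencoder per layer and feeds each layer's activations to the next. All names and hyperparameters here are illustrative assumptions; a real implementation would add weight decay, a sparsity penalty, and a proper optimizer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.5, epochs=200, seed=0):
    """Train a one-hidden-layer autoencoder on X with plain batch
    gradient descent; returns the encoder parameters (W1, b1)."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.01, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.01, (n_hidden, n_in)); b2 = np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)          # encode
        Xhat = sigmoid(H @ W2 + b2)       # decode (reconstruction)
        d_out = (Xhat - X) * Xhat * (1 - Xhat)
        d_hid = (d_out @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ d_out / len(X);  b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_hid / len(X);  b1 -= lr * d_hid.mean(axis=0)
    return W1, b1

def greedy_pretrain(X, layer_sizes):
    """Train the layers one at a time: each new autoencoder is fit to
    the activations produced by the layers trained before it."""
    params, A = [], X
    for n_hidden in layer_sizes:
        W, b = train_autoencoder(A, n_hidden)
        params.append((W, b))
        A = sigmoid(A @ W + b)   # this layer's features feed the next one
    return params

# Example: pretrain two hidden layers on random stand-in "data".
X = np.random.default_rng(1).random((100, 20))
init_weights = greedy_pretrain(X, layer_sizes=[16, 8])
```

The returned parameters would then initialize the hidden layers of the full deep network, with a randomly initialized classification layer on top, before the whole architecture is fine-tuned on the labeled data. The next two subsections discuss why this pretraining helps.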

Availability of data

While labeled data can be expensive to obtain, unlabeled data is cheap and plentiful. The promise of self-taught learning is that by exploiting the massive amount of unlabeled data, we can learn much better models. By using unlabeled data to learn a good initial value for the weights in all the layers (except for the final classification layer that maps to the outputs/predictions), our algorithm is able to discover patterns from far more data than purely supervised approaches can use. This often results in much better classifiers being learned.

Better local optima

After having trained the network on the unlabeled data, the weights are now starting at a better location in parameter space than if they had been randomly initialized. We can then further fine-tune the weights starting from this location. Empirically, it turns out that gradient descent from this location is much more likely to lead to a good local minimum, because the unlabeled data has already provided a significant amount of "prior" information about what patterns there are in the input data.
